When galaxies form new stars, they sometimes do so in frantic episodes of activity known as starbursts. These events were commonplace in the early Universe, but are rarer in nearby galaxies.
During these bursts, hundreds of millions of stars are born, and their combined effect can drive a powerful wind that travels out of the galaxy. These winds were known to affect their host galaxy — but this new research now shows that they have a significantly greater effect than previously thought.
An international team of astronomers observed 20 nearby galaxies, some of which were known to be undergoing a starburst. They found that the winds accompanying these star formation processes were capable of ionising gas up to 650 000 light-years from the galactic centre — around twenty times further out than the visible size of the galaxy. This is the first direct observational evidence of local starbursts impacting the bulk of the gas around their host galaxy, and has important consequences for how that galaxy continues to evolve and form stars.
“The extended material around galaxies is hard to study, as it’s so faint,” says team member Vivienne Wild of the University of St. Andrews. “But it’s important — these envelopes of cool gas hold vital clues about how galaxies grow, process mass and energy, and finally die. We’re exploring a new frontier in galaxy evolution!”
The team used the Cosmic Origins Spectrograph (COS) instrument on the NASA/ESA Hubble Space Telescope to analyse light from a mixed sample of starburst and control galaxies. They were able to probe these faint envelopes by exploiting even more distant objects — quasars, the intensely luminous centres of distant galaxies powered by huge black holes. By analysing the light from these quasars after it passed through the foreground galaxies, the team could probe the galaxies themselves.
“Hubble is the only observatory that can carry out the observations necessary for a study like this,” says lead author Sanchayeeta Borthakur, of Johns Hopkins University. “We needed a space-based telescope to probe the hot gas, and the only instrument capable of measuring the extended envelopes of galaxies is COS.”
The starburst galaxies within the sample were seen to have large amounts of highly ionised gas in their halos — but the galaxies that were not undergoing a starburst did not. The team found that this ionisation was caused by the energetic winds created alongside newly forming stars.
This has consequences for the future of the galaxies hosting the starbursts. Galaxies grow by accreting gas from the space surrounding them, and converting this gas into stars. As these winds ionise the future fuel reservoir of gas in the galaxy’s envelope, the availability of cool gas falls — regulating any future star formation.
“Starbursts are important phenomena — they not only dictate the future evolution of a single galaxy, but also influence the cycle of matter and energy in the Universe as a whole,” says team member Timothy Heckman, of Johns Hopkins University. “The envelopes of galaxies are the interface between galaxies and the rest of the Universe — and we’re just beginning to fully explore the processes at work within them.”
While we stay here on Earth gazing upward in awe at how magnificent our star, the Sun, is, astronomers have just found a stable planet inside a triple-star system that makes our Solar System look rather plain.
There are many things beyond our grasp, and among them is space with its incomprehensible array of planets, galaxies and much more. Astronomers are always on the lookout for something extraordinary, and they have now found a stable planet inside a triple-star system. For the time being, this new discovery has drawn attention away from our own Solar System.
A team of astronomers working at the Harvard-Smithsonian Center for Astrophysics has just discovered a stable planet inside a three-star system.
To date, there are more than 1900 confirmed exoplanets (planets outside our Solar System, orbiting stars other than the Sun) and 4696 Kepler planet candidates. Many of these planets sit in multi-star systems with two stars, but finding a planet with three stars is incredibly rare: only four systems with three stars had been found before. This new system is the closest of the four to us, giving us the chance of a better look than has been possible with the other finds.
The newfound planet, named KELT-4Ab, is a hot Jupiter (a planet whose characteristics are similar to Jupiter's but with a high surface temperature) orbiting very close to its host star KELT-A, while that star is itself circled by a nearby pair of stars, making this an extremely rare multi-star system found almost 700 light-years from us.
Our current understanding of the system is that it contains three stars: KELT-A, KELT-B and, as you probably guessed, KELT-C.
KELT-4Ab, the planet, orbits KELT-A with a short, regular period. Meanwhile, KELT-B and KELT-C orbit each other on a much longer period and lie far from KELT-4Ab.
For the new finding, the team used the two robotic telescopes that make up the Kilodegree Extremely Little Telescope (KELT) to observe the star system. Researchers had studied the KELT system for some time, but they thought it contained just a single star. Further research upgraded it to a binary system, and it now appears to be a triple-star system.
You can read the full report in The Astronomical Journal.
At the end of the 19th century, it was known that the elements were made of atoms, thought to be unbreakable and different for each element. The masses of the atoms were known for several elements, but their composition was still a mystery.
Michael Faraday discovered that atoms are in fact composed of charged species, even though they are electrically neutral. His discovery came from an experiment in which a current passes through silver electrodes immersed in a solution containing silver (AgNO3). When the current passed, the mass of the electrode increased significantly: the silver ions in solution reacted with the electrons from the current to form solid silver gathering on the electrode.
This reaction showed that atoms contain positively charged species, and therefore also negatively charged species to neutralise the overall charge of the atom.
Joseph John Thomson proved the existence of electrons in 1897 during his work on cathode tubes. Those tubes contain only vacuum, a cathode and an anode. If the cathode is heated, a current is detected between the electrodes. The heating determines the kinetic energy transmitted to the atoms of the cathode:
While the charge e of the electrons is given by the current:
If a magnetic field H is applied to the cathode tube while only a small gap allows electrons to reach the anode, no current is observed: the electron beam deviates from its normal trajectory by an amount that depends on its mass-to-charge ratio. The walls of the gap were covered with ZnS, a fluorescent species used to detect the deflection of the beam (radius of deflection r):
As a result, the ratio e/m of an electron was determined:
J.J. Thomson imagined a model of a spherical atom in which a sea of charged species moves: the plum pudding model.
The charge of the electron was determined by Robert Andrews Millikan. He beamed X-rays at droplets of oil between two horizontal electrodes. The charged droplets are subjected to several forces: their own weight, the electrostatic force and the friction of the air (air has a given viscosity).
The weight of a droplet is given by
m being the mass of a given droplet, equal to its density multiplied by its volume, and g is the gravity. The electrostatic force is given by
where E is the electric field and q is the charge. The friction is given by
where η is the air viscosity and v is the speed of the droplet. In the absence of an electric field, the droplet falls at a speed
resulting from the equations of W and FR. However in the electric field E, the speed of a droplet is affected:
Except for q, all the terms on the right side of the equation are known, and the speed of the droplets was determined experimentally: a droplet is observed through a microscope to measure the time it takes to travel a given distance.
The result of the experiment was that the charge q was always a multiple of 1.602×10⁻¹⁹ C, some droplets carrying several elementary charges.
The mass of the electron could thus be determined as well: me = 9.11×10⁻³¹ kg.
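Millikan's conclusion, that every measured charge is an integer multiple of one elementary charge, can be illustrated with a short Python sketch. The droplet charges below are made-up illustrative values, not Millikan's data, and extracting e from a rough initial guess is a simplification of his actual analysis.

```python
# Hypothetical droplet charges (in C); each is an integer multiple of e.
droplet_charges = [3.204e-19, 1.602e-19, 8.010e-19, 4.806e-19, 6.408e-19]

def elementary_charge(charges, e_guess=1.6e-19):
    """Estimate e: divide each charge by its nearest integer multiple of a
    rough initial guess, then average the results."""
    ratios = [q / round(q / e_guess) for q in charges]
    return sum(ratios) / len(ratios)

e = elementary_charge(droplet_charges)
print(f"e ≈ {e:.4e} C")   # close to 1.602e-19 C
```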
Despite this brilliant experiment, the plum pudding model of J.J. Thomson was not correct and was refuted by Ernest Rutherford (who had actually been one of his students), the pioneer of nuclear chemistry. Rutherford studied the emission of α particles from uranium. An α particle is the equivalent of the nucleus of a helium atom, 2 neutrons and 2 protons: He2+. α particles were beamed at a thin foil of gold. According to Thomson's model, all of the radiation should have passed through the gold foil. 99% of it did pass, and this was not due to experimental error: some of the α particles were deflected in all directions. Rutherford concluded that the deflected 1% had bounced off a dense aggregate with an intense positive charge, containing the majority of the mass of the element, the rest of the volume of the atom being empty space and a cloud of electrons. The size of an atom is about 1 ångström (1 Å = 10⁻¹⁰ m) in diameter, while a nucleus measures about 10⁻¹⁵ m.
Rutherford was also the first to transmute one element into another. He did so by bombarding pure nitrogen with α particles, obtaining oxygen and hydrogen nuclei (protons were not yet known). He assumed that hydrogen nuclei are part of the solid nucleus of every atom. From then on, atoms were no longer unbreakable.
Later, Rutherford theorized the existence of neutrons to keep the positively charged nucleus in one piece, reducing the repulsion between protons and giving a cohesion energy to the nucleus. The bigger the atom, the more neutrons are needed (in proportion to the protons).
This cohesion energy is actually enormous. For example, oxygen is made of 8 neutrons and 8 protons, yet the mass of an oxygen atom is smaller than the sum of the masses of the separate protons and neutrons:
The mass of an oxygen atom is 2.65535×10⁻²³ g. The difference is about 2.269×10⁻²⁵ g per atom. As E=mc², we obtain for one mole a cohesion energy of 1.23×10¹³ J/mol of oxygen. For comparison, a typical chemical reaction is ~10⁵ J/mol. It is thus no surprise that nuclear reactions can produce so much energy.
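The order of magnitude is easy to verify with E = mc². The following Python sketch recomputes the molar cohesion energy of oxygen from the mass defect quoted above.

```python
# Cohesion (binding) energy of oxygen-16 from the mass defect, E = mc².
N_A = 6.022e23              # Avogadro's number, /mol
c = 2.998e8                 # speed of light, m/s

delta_m_atom = 2.269e-25    # mass defect per atom, g (value from the text)
delta_m_kg = delta_m_atom * 1e-3   # grams -> kilograms

E_atom = delta_m_kg * c**2  # energy per atom, J
E_mol = E_atom * N_A        # energy per mole, J/mol

print(f"E ≈ {E_mol:.3e} J/mol")  # ~1.23e13 J/mol, far above ~1e5 J/mol for chemistry
```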
Nuclear chemistry is a very specific domain of chemistry. It is one domain where Lavoisier's rule does not apply: the elements are not conserved and mass can be converted into energy. Still, several types of reactions can be distinguished. But first, let us look at how to write the isotopes involved. We have seen that each chemical element is an atom with a specific composition of protons, electrons and neutrons. This composition gives each element its specific properties. However, many elements exist in several forms. Those forms are the isotopes. They differ by their number of neutrons, the numbers of electrons and of protons being equal and fixed for every element. The chemical properties of isotopes are almost identical (because they are given by the electrons), but some physical properties can differ between isotopes of the same element. The speed of reaction and the boiling point are two examples of properties that change depending on the isotope. The isotope 238 of uranium is written
U is the symbol of the element; the mass number is written at the top left of the symbol and the atomic number Z at the bottom left. The atomic number can be omitted. The proportions of the isotopes of a single element are not equal. For instance, carbon has 2 stable isotopes, 12C and 13C, in proportions of 98.93% and 1.07%. 14C is an isotope of carbon but it is not stable: it decays over time. Historians use this property to date ancient items or bodies.
- Production of α particles
Remember that an α particle is equivalent to a helium nucleus, He2+ (2 protons and 2 neutrons).
The decay of Uranium 238 is an example of reaction producing α particles:
The total of the top numbers (mass numbers) is conserved during the reaction, and the same is true for the bottom numbers (atomic numbers). The γ product is gamma radiation, produced during radioactive decays because the newly formed nucleus is generally in an excited state. To come back to its ground state, it frees energy in the form of electromagnetic radiation.
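The conservation of the top (mass) and bottom (charge) numbers amounts to simple bookkeeping. This Python sketch, writing nuclides as illustrative (A, Z) pairs, checks the α decay of uranium 238 into thorium 234 plus helium.

```python
# Balance check for a nuclear equation: 238U -> 234Th + 4He.
# A nuclide is represented as a pair (mass number A, atomic number Z).
def balanced(parents, products):
    """True if both total mass number A and total charge Z are conserved."""
    total_A = lambda side: sum(a for a, z in side)
    total_Z = lambda side: sum(z for a, z in side)
    return total_A(parents) == total_A(products) and total_Z(parents) == total_Z(products)

U238, Th234, He4 = (238, 92), (234, 90), (4, 2)
print(balanced([U238], [Th234, He4]))   # True: 238 = 234 + 4 and 92 = 90 + 2
```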
- Production of β particles
β particles are small charged particles emitted during some nuclear reactions. The thorium 234 obtained previously can produce this kind of particle:
The mass number of the element did not change during this reaction. However, the element changed from thorium to protactinium: one neutron became a proton. The β particle produced during this reaction is an electron ejected from the neutron as it becomes a proton.
An antineutrino is also generated. A nucleus has a given spin depending on its charge: it turns on itself in a given direction. Electrons also have a spin. During the nuclear reaction above, the charge of the nucleus changed, and an antineutrino is liberated so that the spin balance remains correct.
A second kind of β particle can be obtained, for example during the decay of sodium 22:
This β particle is not an electron but a positron. A neutrino is also obtained during this reaction. Neutrinos and antineutrinos are radiations that can pass through almost anything.
Electrons and positrons can annihilate each other to produce gamma radiation.
Fission is done by bombarding an isotope with neutrons. In nuclear plants, fission is done on Uranium 235
More neutrons are produced by the reaction than are needed to launch it, so the reaction can start again as long as there is uranium 235 in the reactor. One way to stop the reaction is to trap the neutrons with another isotope.
Fusion is done by merging two isotopes. For example, two isotopes of Hydrogen can produce Helium
Another way to obtain Helium is to bombard Hydrogen with electrons.
It is also a fusion reaction.
The elements used in fusion and in fission are different. The cohesion energy is different for each element, and the cohesion energy per nucleon (proton + neutron) reaches a maximum for 56Fe. Fusion is performed on elements of lower mass, up towards 56Fe: atoms gain cohesion energy during fusion. On the other side, elements of higher mass lose mass and release energy as their products move down towards 56Fe.
Radioactive elements do not stay active forever. The radioactivity decreases over time, proportionally to the number of particles:
where v is the speed of decomposition, N the number of radioactive particles and k is a speed constant depending on the isotope.
Because it depends on the number of particles, the speed decreases over time. We can integrate the speed equation:
The half-life is the time needed for the population of a radioactive element to decrease by half:
This time depends only on the isotope, not on the size of its population.
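The decay law and the half-life can be sketched in a few lines of Python; the rate constant used below is purely illustrative.

```python
import math

# First-order radioactive decay: N(t) = N0 * exp(-k t),
# with half-life t_half = ln(2) / k, independent of N0.
def population(N0, k, t):
    return N0 * math.exp(-k * t)

def half_life(k):
    return math.log(2) / k

k = 0.005              # illustrative rate constant, per year
t_half = half_life(k)  # ~138.6 years for this k
# After one half-life, half of the initial population remains:
print(population(1000, k, t_half))   # ≈ 500
```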
1. Complete these nuclear reactions:
2. A piece of manuscript has been analysed for dating. It has been found that the 14C/12C ratio of the manuscript is equal to 0.802 times the value for a plant of today. Given that the half-life of 14C is 5720 years, what is the age of the manuscript?
3. How much energy is generated by the fusion of 1.2 g of deuterium (D or 2H),
given that the masses of the two species are MD=2.0141g/mol and MHe=4.0026g/mol?
2. The half-life of a radioactive element is the time it takes for the population of the element to decrease by half. The general formula to calculate the population of an isotope is
The half-life is
It allows us to determine the value of k:
The 14C/12C ratio given in the problem is the inverse of N0/N(t). Now that we know the value of k, we can find the age of the manuscript with the equation
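As a numeric check, this Python sketch carries out the computation of exercise 2 with the values given in the text (half-life 5720 years, ratio 0.802).

```python
import math

t_half = 5720.0              # half-life of 14C, years
k = math.log(2) / t_half     # decay constant, per year
ratio = 0.802                # N(t)/N0 measured on the manuscript

# N(t)/N0 = exp(-k t)  =>  t = ln(N0/N(t)) / k
age = math.log(1 / ratio) / k
print(f"age ≈ {age:.0f} years")   # ≈ 1821 years
```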
3. In 1.2 g of deuterium, there are 1.2 g / 2.0141 g/mol = 0.5958 mol. We need two deuterium nuclei to form one atom of helium, so the amount of helium after the reaction is 0.2979 mol.
The energy ΔE generated by the reaction is proportional to the mass Δm lost during the reaction. The mass lost for each mole of He produced is
Thus Δm=0.007626g for 1.2g of deuterium. The energy generated is given by
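The arithmetic of exercise 3 can also be checked numerically; this Python sketch uses only the molar masses given in the problem.

```python
# Energy released by fusing 1.2 g of deuterium, 2 D -> He.
M_D, M_He = 2.0141, 4.0026   # molar masses from the text, g/mol
c = 2.998e8                  # speed of light, m/s

n_D = 1.2 / M_D              # mol of deuterium (~0.5958 mol)
n_He = n_D / 2               # two D per He (~0.2979 mol)

delta_m_per_mol = 2 * M_D - M_He         # g lost per mole of He (0.0256 g/mol)
delta_m = delta_m_per_mol * n_He * 1e-3  # total mass lost, kg

E = delta_m * c**2
print(f"E ≈ {E:.3e} J")      # ≈ 6.85e11 J
```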
Let’s summarize what we know about atoms: atoms can be broken apart and are composed of charged species (protons and electrons) and neutral particles (the neutrons). The atoms of different elements (or isotopes) differ by the numbers of those three species. A nucleus sits at the centre of the atom, surrounded by empty space and a cloud of electrons. The nucleus is made of neutrons and protons and accounts for the major part of the mass of the atom. However, the spatial distribution of the electrons is not random. Several planetary-orbit theories were proposed after the work of Rutherford on the atomic nucleus, but Bohr's model can be considered the first viable one. Some of the hypotheses of this model were not correct, but it was the first step toward understanding the structure of the atom.
Bohr worked on the emission of light from dihydrogen, H2. In an electric discharge, H2 dissociates into 2 excited atoms H* that emit light to get back to their fundamental level.
hν is the usual way to represent a photon, emitted or absorbed during a physical or chemical process. h=6.626×10−34Js is the Planck constant. The emitted light has only a few selected wavelengths.
Based on that fact, he elaborated a model for the electronic structure of the atom. Unfortunately, the model only works for hydrogen and for the cation of He, i.e. atoms with a single electron.
Bohr used 4 postulates:
- electrons revolve on circular orbits
- on a given orbit, an electron does not lose energy
- gains or losses of energy correspond to jumps from one orbit to another
- electrons are subject to an angular momentum L
Bohr calculated the radius and the energy of the orbits: to stay on a given circular orbit, the electron experiences a centrifugal force (left part of the equation), which is counterbalanced by the attraction of the nucleus (right part):
where ε0 is the permittivity of vacuum. As mvr = nħ, we can simplify the equations using its square:
Injecting the centrifugal force into the previous equation, we obtain an expression for the radius of the orbit of an electron:
The energy of an electron is the sum of its kinetic energy and its potential energy. The potential energy comes directly from Coulomb's force (remember that an energy is a force multiplied by a distance):
The radius was determined previously and can be used to determine the energy of an electron for a given atom
R is the Rydberg constant: R = −2.178×10⁻¹⁸ J.
To jump to an orbit of greater number/radius, an electron requires a given amount of energy, obtained from heat or light. The electron is then said to be excited.
To go back to its fundamental state, the excited electron frees the same energy in the form of a light of a given wavelength. The wavelength of the light is directly related to the difference of energy between the orbitals:
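With the Rydberg energy above, the emitted wavelength follows from |ΔE| = hc/λ. A minimal Python sketch for the n=2 → n=1 transition of hydrogen:

```python
# Wavelength of a Bohr transition in hydrogen: E_n = R / n², |ΔE| = h c / λ.
R = -2.178e-18    # Rydberg constant as used in the text, J
h = 6.626e-34     # Planck constant, J s
c = 2.998e8       # speed of light, m/s

def wavelength(n_high, n_low):
    dE = abs(R / n_high**2 - R / n_low**2)   # photon energy, J
    return h * c / dE                        # wavelength, m

lam = wavelength(2, 1)           # n=2 -> n=1 transition
print(f"λ ≈ {lam*1e9:.1f} nm")   # ≈ 121.6 nm, in the UV as stated
```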
This kind of transition, from an excited level down to the fundamental one, is called a Lyman transition and emits at UV wavelengths. If the destination level is not the fundamental one, other names are given:
The second hypothesis of Bohr was incorrect, however, and electrons do interact with one another.
To determine the correct form of orbital and the position of electrons, it is first necessary to develop quantum mechanics.
An important principle of quantum mechanics is the Heisenberg uncertainty principle: it is impossible to determine at the same time the exact position and the exact momentum of a particle such as an electron or a photon:
This relation means that we can only work with probabilities for the position of an electron. The electron is not considered as a point but as a stationary wave along its orbit.
The wave has a given number n (1, 2, 3,…) of fixed points, i.e. points where the wave crosses the theoretical position of the electron on its orbital. It is easier to explain this on a linear path but it works as well on a circular orbital. This number n will be useful later.
To determine the (probability of) position of an electron, we have to solve the Schrödinger equation:
Ĥ is the Hamiltonian operator and Ψ is the wave function, to be determined. To solve this equation, we use the particle-in-a-box method: considering a box of length L containing a particle, the probability of finding the particle at a given place can be determined. In our case, the particle is an electron and the nucleus is at the centre of the box.
So, the relation to solve is
The fact that the second derivative of the function gives back the function, with a negative sign, suggests a sine or cosine solution; the boundary conditions below will select the sine. We will try a solution of the type
A and k will be determined later and do not depend on x. We can easily confirm our previous guess now:
As A does not depend on x, it can be moved out of the derivative and simplified with the other side of the equation,
Giving us a relation for E
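Substituting k = nπ/L (derived from the boundary conditions just below) turns this relation into the familiar box energies E_n = n²h²/(8mL²). A small Python sketch, with an illustrative 1 nm box:

```python
# Energy levels of a particle in a 1-D box: E_n = n² h² / (8 m L²).
h = 6.626e-34     # Planck constant, J s
m_e = 9.11e-31    # electron mass, kg
L = 1e-9          # box length, m (illustrative value, not from the text)

def E(n):
    return n**2 * h**2 / (8 * m_e * L**2)

print(E(1))         # ground-state energy, ~6e-20 J
print(E(2) / E(1))  # energies scale as n², so this ratio is 4.0
```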
The values of k and A will now be determined from the boundary conditions of the box:
- the electron is not on the boundaries
- the electron is in the box: its probability of presence Ψ2, summed over the whole box, equals 1
From the first relation, we find the value of k:
Taking this into account, the second relation gives the value of A:
The integral of the squared sine fixes the normalisation, so the value of A is simply
The wave function is thus
The probability to find an electron at a given place is Ψ2(x):
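These statements can be checked numerically: the wave function is normalised, and for n=2 it has a node at the centre. A Python sketch with L set to 1 for convenience:

```python
import math

# Box wave function Ψ(x) = sqrt(2/L) sin(n π x / L): Ψ² integrates to 1
# over the box, and for n=2 the density vanishes at the centre.
L = 1.0

def psi(x, n):
    return math.sqrt(2 / L) * math.sin(n * math.pi * x / L)

# crude midpoint integration of Ψ² over [0, L]
N = 10000
total = sum(psi((i + 0.5) * L / N, 1)**2 for i in range(N)) * (L / N)
print(round(total, 6))    # 1.0: normalised
print(psi(L / 2, 2))      # ~0: node at the centre for n=2
```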
As a result, for n=1 the probability of finding the electron is greatest at the centre of the box, i.e. close to the nucleus. For n=2, there is a fixed point at the centre of the box, meaning that the probability of being near the nucleus is zero.
However, the volume near the nucleus is very small and this has to be taken into account. We therefore use a radial probability of presence, Ψ2(x)4πr2, to obtain the distribution of the electrons.
Depending on n, the (probable) position of an electron thus varies. For an increased n, the distance of the electrons from the nucleus increases, but successive levels become closer and closer to each other (remember the figures of the Bohr model).
We won't develop quantum mechanics much further in this section. One more notion is however needed to determine the number of electrons in the different orbitals: the quantum numbers.
There are 4 quantum numbers (QN)
- Principal QN: n=1, 2, 3,… this number defines the size and the energy of the orbital
- QN of angular momentum: l. l goes from 0 to n−1. Depending on the value of l, the orbitals are named s, p, d or f. The shapes of these orbitals are different:
- Orbital s
- Orbital p
Orbitals p are axial
- Orbital d
Orbitals d are essentially biaxial. Two of them lie along the axes and the other three lie at 45° between the axes.
- Orbital f
Orbital f are polyaxial (and won’t be drawn here, sorry)
- Magnetic QN:ml
ml goes from –l to l and defines the orientation of the orbital
- QN of spin: ms
ms defines the spin of the electron. Electrons can spin in two opposite directions, and ms = −½ or ½.
No two electrons may share the same set of QN: this is the Pauli exclusion principle. It means that an orbital can only accept a given number of electrons: there can only be 2 electrons, of opposite spin, for a given set of the three other QN. When two electrons share an orbital, we say that they are paired.
For example, there can be 8 electrons for n=2: l can have a value of 0 or 1
For l=0, the corresponding orbital is the orbital 2s. ml=0 and ms can either be ½ or -½. There are thus 2 electrons of opposite spin in the 2s orbital.
For l=1, the corresponding orbital is 2p. Three ml values are possible (going from −l to l): ml=−1, ml=0 and ml=1 (corresponding to the three Cartesian directions x, y and z). Each of these can accept an electron of spin ½ and one of spin −½. The 2p orbitals can thus hold 6 electrons.
In total, 8 electrons can be placed for n=2.
If n=3, 10 additional electrons can be placed in the d orbitals, for a total of 18.
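The counting rule generalises: for each l from 0 to n−1 there are 2l+1 orbitals holding 2 electrons each, which sums to 2n² electrons per shell. A short Python check:

```python
# Electron capacity of shell n: sum over l of 2*(2l+1), which equals 2n².
def capacity(n):
    return sum(2 * (2 * l + 1) for l in range(n))

print([capacity(n) for n in (1, 2, 3, 4)])   # [2, 8, 18, 32]
```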
Not all the orbitals have the same energy, and electrons first occupy the lower-energy orbitals. Indeed, the external electrons sit on more energetic orbitals: the inner shells of electrons shield the charge of the nucleus and also repel the external electrons.
The energy of an orbital depends on n
But also on l
However, the global classification is more complicated than just all the orbitals of n=1, then of n=2, etc…
There is a method to remember the order in which we place the electrons on orbitals:
The electrons are placed following the arrows. 2 electrons can be placed in ns orbitals, 6 in np orbitals, 10 in nd orbitals and 14 in nf orbitals; this number is indicated on the figure at the top right of each (n, l) set of orbitals. Two electrons are first placed in the 1s orbital, each with a different spin, and the same is done in the 2s orbital: it takes less energy to place two paired electrons in the 1s orbital than one in 1s and one in 2s. However, within a same set (n, l) it takes less energy to place one electron in each individual orbital. So, as there are three orbitals in 2p (2px, 2py and 2pz), one electron is first placed in each of those three orbitals and only then are the electrons paired.
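The arrow diagram encodes the diagonal n + l rule: orbitals fill in order of increasing n + l, with ties broken by smaller n. This Python sketch reproduces the resulting order:

```python
# Orbital filling order by the n + l rule (ties broken by smaller n).
ORBITAL_NAMES = {0: "s", 1: "p", 2: "d", 3: "f"}

def filling_order(max_n=5):
    orbitals = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{ORBITAL_NAMES[l]}" for n, l in orbitals if l <= 3]

print(filling_order())
# starts with: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, ...
```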
There are some exceptions to this method due to the particular stability of d orbitals. d orbitals are the outer orbitals of the transition metals. The d orbitals are very stable when they are complete or half complete, i.e. hold 10 or 5 electrons. Electrons of lower orbitals can be displaced into the d orbitals to reach this quota. Let's look at some metals to understand this phenomenon better.
The electronic configuration of V (Z=23) is
[Ar] 4s2 3d3
There is no need to write the complete configuration of atoms: the inner electrons don't have any impact on the properties of the atom. Instead, we write the symbol of the previous noble gas between square brackets. Argon (Ar) has Z=18, so there are still 5 electrons to place in orbitals. The first two electrons are paired in the 4s and the other three are placed in three 3d orbitals. Even if 4s has a bigger n, it is less energetic than 3d because of a penetration effect. There is nothing particular in this case.
The electronic configuration of Cr (Z=24), that has 1 more electron than V is
[Ar] 4s1 3d5
One electron has been taken from the 4s orbital to reach the half completion of the d orbitals. For Z=25 (manganese, Mn), the 4s orbital receives the additional electron. The same phenomenon occurs for copper, Cu (Z=29), to obtain a complete 3d orbital:
Ni: [Ar] 4s2 3d8
Cu: [Ar] 4s1 3d10
In the periodic table of Mendeleev, a new line begins when we place the first electron in an ns orbital. However, this is not the way the table was developed initially. We will look closely at this table in the next chapter, explaining its shape, the elements and the general properties that can be directly related to the place of an element in the table.
1. To which element corresponds this electronic structure:
1s2 2s2 2p4
[Ar] 4s2 3d6
2. What is the electronic configuration of Ar, Si, Cr, Nb, Al, F, Rb, Es?
1s2 2s2 2p4 oxygen
[Ar] 4s2 3d6 Fe
[Ne] 3s1 Na
Ar: [Ne] 3s2 3p6
Si: [Ne] 3s2 3p2
Cr: [Ar] 4s1 3d5
Nb: [Kr] 5s2 4d3
Al: [Ne] 3s2 3p1
F: 1s2 2s2 2p5
Rb: [Kr] 5s1
Es: [Rn] 7s2 5f11
The table of Mendeleev is also called the periodic table of the elements. It is more than a simple list of the existing atoms: Mendeleev sorted the elements with regard to their oxidation/reduction by O and H:
R2O RO R2O3 RH4 RH3 RH2 RH R
where R is the element. The atoms of same oxidation were sorted by weight to obtain the different lines. Some holes were present in the table, but it was assumed that the missing elements had simply not yet been discovered. Scientists have since been able to fill in the periodic table through the artificial synthesis of those elements. In the table shown above, the elements from 110 to 118 are theoretical.
It may not look like it, but the determination of this table was a huge improvement for chemists. The table contains a large amount of information for each atom, and it also shows some periodic trends.
There is a large number of different models of the periodic table. Some of them only show the symbol, name, atomic number (Z) and weight of each element, but much other information can be displayed for each element. An example follows for aluminium.
Its symbol is Al, its atomic number is 13 and its weight is 26.981538 atomic mass units (equivalent to the molar mass, in g/mol). The weight of an element is the average of the masses of its isotopes, taking into account their proportions. The weight is the property we look up most often in a Mendeleev table. The number of electrons per layer can be determined from Z but is often indicated on the periodic table. Aluminium has Z=13, meaning that the layer n=1 is complete with 2 electrons, the layer n=2 is complete with 8 electrons (2 in 2s2 and 6 in 2p6), and 3 electrons remain to be distributed in the n=3 layer: the first 2 electrons go into 3s2 and the last electron into a 3p orbital. From that, we can determine that Al is on the third line of the table (n=3), in the third column (3 electrons in the outermost layer).
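The abundance-weighted average is easy to reproduce. Aluminium has a single stable isotope, so the Python sketch below uses chlorine instead; the isotope masses and abundances are approximate reference values, not taken from this text.

```python
# Atomic weight as the abundance-weighted average of isotope masses.
# Approximate reference values for the two stable chlorine isotopes:
chlorine = [
    (34.9689, 0.7576),   # 35Cl: mass (u), natural abundance
    (36.9659, 0.2424),   # 37Cl
]

atomic_weight = sum(mass * frac for mass, frac in chlorine)
print(f"Cl ≈ {atomic_weight:.2f} u")   # ≈ 35.45, as on the periodic table
```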
Aluminium has a unique oxidation state (OS) of +3 (to be seen in a later section). Elements after the third column can have several OS depending on how they use their electrons. Finally, some temperatures are indicated to let us know in which state (solid, liquid or gaseous) the element is found at a given temperature.
Each line corresponds to a value of n. Because a layer of electrons is added at each line, the radius of the elements increases as we descend the periodic table. The number of elements in a line is not identical from line to line, because the p and d orbitals are not present in the first lines. One can also see two particular lines at the bottom of the table, the lanthanides and the actinides, corresponding to the elements possessing an outer nf orbital.
The elements in a same column have the same number of valence electrons. Each column is called a family and has a particular name. The first two columns and the last six are denoted Ia, IIa, IIIa, …, VIIIa and contain the main-group elements. In between are the transition metals, noted Ib, IIb, …
Ia (except H): alkali metals: they are all shiny (but tarnish in contact with air), soft, highly reactive metals (at sctp, standard conditions of temperature and pressure) and readily lose their outermost electron to form a cation. They form strong bases when bound to OH– (NaOH, KOH).
IIa: alkaline earth metals: they are all shiny and reactive (at sctp). They readily lose their two outermost electrons to form a cation of charge 2+.
IIIa: icosagens or triels: the column of boron and aluminium. They have 3 electrons in their outermost layer. Aluminium is one of the rare metals to have a low density.
IVa: crystallogens or tetrels: the group of carbon and silicon. They have 4 valence electrons. Carbon is the essential component of living bodies (~23% of a human), but it is also a constituent of the earth through carbonates, and of the atmosphere through CO2. It is very resistant to heat. Silicon is a major constituent of the earth, the second most abundant element there. Si and C are in the same column but are surprisingly different. One of their common properties is that they can make long chains (though C can make much longer chains than Si). C can form pi bonds while Si cannot, because its radius is too large (no possible overlap of the orbitals). CO2 is a gas while SiO2 is a solid (quartz) that is the base of glass materials.
Va: pnictogens: the group of the nitrogen and of the phosphorus. 5 valence electrons, with two paired. They form stable covalent liaisons and can form double and triple liaisons. This ability to form persistent liaisons is the source of the toxicity of some elements of this group (arsenic, antimony). On the other hand, N2 is an inert gas which represents 78% of the air.
VIa: chalcogens: The group of the oxygen and of the sulphur. Oxygen has very different properties than the rest of this group: they are soft and do not conduct heat well. Oxygen makes up 21% of the atmosphere by weight, 46% of the earth’s crust by weight and 65% of the human body. Oxygen also occurs in many minerals, being found in all oxide minerals and hydroxide minerals, and in numerous other minerals. Ozone is spontaneously formed in the high atmosphere where it catches UV rays from the sun.
The emitted radical will react immediately with an adjacent molecule. The ozone is a better oxidant than O2 because a pi liaison is delocalised. As a result, ozone is often used to kill bacteria’s without waste.
VIIa: Halogens: All the halogens form acids when bound to a hydrogen and are generally toxic. They also from salts when bound to alkali’s.
VIIIa: Noble gas/Rare gas: those element are inert: they do not react with any other element. Helium is the most common element in the universe (~24% of its mass). Because of their lack of reactivity, there are used in lighting (also true for nitrogen)
Metals, metalloids, nonmetals
The periodic table can be divided into three broad sections: metals, metalloids and nonmetals. These three categories are well grouped in the table (except H, as always). Most of the elements are metals. The nonmetals occupy the top right corner of the table (+H), and the metalloids are only the elements on a diagonal from boron to polonium.
Metals have good thermal and electrical conductivity. As a broad category they share common properties, but several elements are exceptions for one or more of them.
They generally have a low ionisation energy and a low electronegativity, and give or share their electrons when bonding. The bonding of two or more metals forms an alloy. Most of them can form oxides and are naturally found in that state.
In general they are soft, malleable solids of high density, but some are liquid in sctp (Hg for example). Most of them are silvery coloured.
Most nonmetals are low-density gases in sctp. When bonding, they share or accept electrons and do not form basic oxides, but rather acidic ones (HClO, H2SO4, etc.). In opposition to metals, they conduct electricity and heat poorly, and have a high ionisation energy and electronegativity. Usually, they are not naturally found in a combined state.
Metalloids are the minority of the elements. They look like metallic solids and can form oxides (acidic, basic or amphoteric). Most are semiconductors and moderate thermal conductors, and have structures that are more open than those of most metals.
Several kinds of radii can be determined for a single element.
Crystalline radius: a crystal is a solid in which the atoms are spatially ordered. Monocrystals are obtained from the aggregation of atoms around one unique seed. They have the advantage of having fewer defects in their structure than crystals grown around several sources. Crystals are not necessarily made of a single element; it is usual, for example, to grow crystals of proteins to determine their spatial structure.
The method to obtain crystals is as follows: the component of the crystal is first dissolved in a solvent. The dissolution can be helped by an increase of the temperature. Once the dissolution is complete, the goal is to decrease, very slowly, the affinity of the component for the solvent. This can be done through evaporation of the solvent (another solvent of lower affinity replaces the evaporated one) or, if the solution was heated, through a slow decrease of the temperature. The process has to be slow to avoid the formation of several centres of aggregation.
The structure of a crystal and the distance between the atoms inside it depend on its constitutive elements and can be determined by X-ray diffraction.
In this method, an X-ray beam hits the surface of a crystal at a given angle θ.
The beam interacts with the atoms of the first layers of the crystal. This interaction gives rise to diffraction only if nλ = 2d sin θ,
where d is the distance between layers in the crystal, θ the angle of incidence, λ the wavelength of the beam and n an integer (the order of diffraction). This equation is Bragg's equation. If this condition is not fulfilled, the waves scattered by the atoms interfere destructively (i.e. they cancel each other) and no diffraction occurs. The radius is half of the distance between atoms in the crystal.
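As a minimal sketch of Bragg's condition, the first-order diffraction angle can be computed for assumed, illustrative values (Cu Kα radiation at 0.154 nm on planes spaced like those of NaCl, d ≈ 0.282 nm):

```python
import math

def bragg_angle(d, wavelength, n=1):
    """Angle theta (degrees) satisfying Bragg's equation n*lambda = 2*d*sin(theta).

    Raises ValueError when n*lambda > 2d, i.e. no diffraction is possible."""
    s = n * wavelength / (2 * d)
    if s > 1:
        raise ValueError("no diffraction possible: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# Hypothetical example: Cu K-alpha X-rays (0.154 nm) on planes with d = 0.282 nm.
theta = bragg_angle(d=0.282, wavelength=0.154)
print(round(theta, 1))  # ~15.8 degrees for first-order diffraction
```

The same function immediately shows why higher orders need larger angles: the sine grows linearly with n.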
Calculated radius: we consider here that the radius at which the probability of presence of the outermost electron is largest is the radius of the atom.
Van der Waals radius: the van der Waals radius is obtained from the combination of the repulsive and attractive forces between two atoms. These two forces depend on the distance separating the atoms, but not in the same way: the attractive force is proportional to the distance to the power −7 while the repulsive force goes as the power −13. As a result, there is a favoured distance between the atoms when they bind: the bottom of the attraction well. Reducing this distance increases the potential energy drastically; increasing it also raises the potential energy, towards 0. σ is the distance of closest approach, the point where the curve crosses Epot = 0.
The radius of the atom can be determined from σ.
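The r⁻⁷/r⁻¹³ force laws quoted above are those of the Lennard-Jones 12-6 potential. A minimal sketch (with ε and σ set to 1, purely for illustration) locates the bottom of the attraction well numerically:

```python
def lj_potential(r, sigma=1.0, epsilon=1.0):
    """Lennard-Jones 12-6 potential: repulsion ~ r^-12, attraction ~ r^-6.

    The corresponding forces scale as r^-13 and r^-7, as stated in the text.
    sigma is the distance where the potential crosses zero."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Coarse scan for the minimum of the attraction well:
rs = [1.0 + i * 0.0001 for i in range(5000)]
r_min = min(rs, key=lj_potential)
print(r_min)  # close to 2**(1/6) ~ 1.122 sigma, the analytic minimum
```

The scan recovers the known analytic result that the favoured distance is 2^(1/6) σ, slightly beyond the zero-crossing σ.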
Covalent radius: for diatomic molecules, the radius of the atom is taken as half of the bond length.
Empirical radius: it is obtained from the volume of the atomic gas. In a given volume of gas there is a given number of atoms. Considering spherical atoms, the volume of one atom is V = 4/3 π r³, from which the radius is determined.
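Inverting V = 4/3 π r³ is a one-liner; the volume below is a hypothetical value used only to show the arithmetic:

```python
import math

def radius_from_atomic_volume(v_atom):
    """Invert V = 4/3 * pi * r^3 to get the radius of one (assumed spherical) atom."""
    return (3 * v_atom / (4 * math.pi)) ** (1 / 3)

# Hypothetical example: if one atom effectively occupies 1.0e-29 m^3,
r = radius_from_atomic_volume(1.0e-29)
print(f"{r:.3e} m")  # ~1.34e-10 m, i.e. about 134 pm
```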
These different radii are not equal, and in some cases one of them does not exist. For example, there is no covalent radius for He.
However, each of them follows the same pattern across the periodic table:
As we saw in the previous section, a layer of electrons is added at each new line of the periodic table. It is thus logical that the radius of the elements increases down a column. Moving across a line, the number of protons in the nucleus increases, so the outer electrons are more and more attracted by the nucleus: from left to right, the radius decreases. As a result, Li and Mg have similar radii, as do Na and Ca.
The ionisation energy (IE) is the energy required to remove one electron from a gaseous atom.
This energy is positive (gaseous atoms do not lose their electrons spontaneously) and depends on the energy of the electron: an electron close to the nucleus is hard to remove. It is also much more difficult to remove a second electron from an atom that has already lost one. For example, aluminium has the following ionisation energies:
The first electron is relatively easy to remove: it is the single electron in the 3p orbital of aluminium. The next electron is much harder to remove: it belongs to a complete orbital. The third ionisation energy is small for a third ionisation: removing it yields the electronic structure of the noble gas neon, whose complete octet is stabilised. This stabilisation is visible in the next ionisation energy: a gigantic energy is required to remove one additional electron.
The ionisation energies are periodic across the Mendeleev table. It takes a massive energy to remove an electron from a noble gas, but it is easier to move towards a noble-gas electronic structure. Within one period/line, the trend of the IE is not linear: it is more difficult than average to remove an electron from a complete orbital.
It is easier to remove a paired electron (less repulsion).
It is easier to empty an orbital: its last electron is shielded by the electrons of inner orbitals.
For the same reason, the IE decreases when moving down the periodic table: the shielding effect grows with the number of layers of electrons.
The electron affinity (electroaffinity) is just the opposite of the ionisation: it is the energy involved when a gaseous atom accepts an additional electron, and it follows the opposite trend to the IE.
Note that noble gases have no electron affinity because they are inert.
Electronegativity is an important notion in chemistry: it is the ability of an atom to attract the electrons of a bond it shares. The symbol for electronegativity is χ.
Pauling set, by convention, χ = 4 for fluorine, the atom of largest electronegativity. The electronegativity of the other elements is then given by Pauling's relation, χX − χY = 0.208 √(DXY − (DXX + DYY)/2) (with D in kcal/mol),
where DXY is the dissociation energy of a bond between X and Y, i.e. the energy required to break it.
Fluorine being in the top right corner of the table, the trend for the electronegativity is not difficult to understand.
- What is the symbol, atomic number and weight of the following species?
Chlorine, silver, sodium, carbon, argon, neon, cerium, magnesium, oxygen, iron, tin, antimony
- Name the following species:
O2, NaCl, ¹₀n, HNO3, SnBr4, ²₁H, P2O5, H2S, HClO2, CO2, HClO3, HNO2, HClO4.
- What is the molar weight of the above cited species?
- What is the formula of
- How much calcium perchlorate do we have to weigh out if we want 50 moles of it?
- How many molecules of AgI are there in 20 g?
- Which element has a higher electronegativity?
- F or I
- Na or Cl
- Mg or Ar
- Which element has a higher energy of ionisation?
- I or Br
- C or F
- P or Na
- Which element has a larger radius?
- Al or S
- Li or O
- Sn or Rb
- Chlorine has two stable isotopes, 35Cl and 37Cl. Their masses are 34.96885 and 36.96590. What are the proportions of the two isotopes, knowing that the atomic mass of chlorine is 35.4527?
- The mass of 0.1726 mol of an acid HXO4 is 25 g. Identify the element X.
- How many valence electrons are there in halogens? In alkaline earth metals?
- What are the names of those reactions?
- What is the symbol, atomic number and weight of the following species?
Chlorine, silver, sodium, carbon, argon, neon, cerium, magnesium, oxygen, iron, tin, antimony
- Name the following species:
NaCl: Sodium chloride
HNO3: nitric acid
SnBr4: tin (IV) bromide
P2O5: phosphorus pentoxide
H2S: hydrogen sulphide
HClO2: chlorous acid
CO2: carbon dioxide
HClO3: chloric acid
HNO2: nitrous acid
HClO4: perchloric acid
- What is the molar weight of the above cited species?
P2O5: 141.95 g/mol
H2S: 34.08 g/mol
HClO2: 68.46 g/mol
CO2: 44.01 g/mol
HClO3: 84.46 g/mol
HNO2: 47.01 g/mol
HClO4: 100.46 g/mol
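The molar weights above can be checked with a small helper; the atomic masses used are standard approximate values:

```python
# Approximate standard atomic masses (g/mol) for the elements used here.
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999,
               "S": 32.06, "Cl": 35.453, "P": 30.974}

def molar_mass(composition):
    """composition: dict mapping element symbol -> number of atoms in the formula."""
    return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

print(round(molar_mass({"H": 1, "Cl": 1, "O": 4}), 2))  # HClO4 -> 100.46
print(round(molar_mass({"C": 1, "O": 2}), 2))           # CO2   -> 44.01
print(round(molar_mass({"H": 1, "N": 1, "O": 2}), 2))   # HNO2  -> 47.01
```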
- What is the formula of
Carbon monoxide: CO
phosphorous acid: H3PO3
sodium bromide: NaBr
dinitrogen trioxide: N2O3
hydrogen peroxide: H2O2
- How much calcium perchlorate do we have to weigh out if we want 50 moles of it?
The formula of calcium perchlorate is Ca(ClO4)2. Its molar mass is M = 40.078 g/mol + 2×(35.4527 g/mol + 4×15.9994 g/mol) = 238.9786 g/mol. Thus, 50 mol of it weighs 11.949 kg.
- How many molecules of AgI are there in 20 g?
1 mol of AgI weighs 234.77267 g, so there are 0.0852 mol of AgI in 20 g. The question is how many molecules are in 20 g of AgI. One mole contains NA (the Avogadro number, 6.022×10²³) molecules, which makes 5.13×10²² molecules.
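The same count, scripted (atomic masses of Ag and I are standard values):

```python
N_A = 6.02214129e23           # Avogadro's number (mol^-1)
M_AGI = 107.8682 + 126.90447  # molar mass of AgI in g/mol (Ag + I)

moles = 20 / M_AGI            # amount of substance in 20 g of AgI
molecules = moles * N_A
print(f"{moles:.4f} mol, {molecules:.2e} molecules")  # ~0.0852 mol, ~5.13e22
```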
- Which element has a higher electronegativity?
- F or I
- Na or Cl
- Mg or Ar (Ar is a noble gas, so it does not accept any additional electron)
- Which element has a higher energy of ionisation?
- I or Br
- C or F
- P or Na
- Which element has a larger radius?
- Al or S
- Li or O
- Sn or Rb
- Chlorine has two stable isotopes, 35Cl and 37Cl. Their masses are 34.96885 and 36.96590. What are the proportions of the two isotopes, knowing that the atomic mass of chlorine is 35.4527?
To solve this, we have to consider 2 things:
- The atomic mass of an element is the weighted average of the masses of its stable isotopes, so 35.4527 = x·34.96885 + y·36.96590, where x and y are the proportions of each isotope.
- The sum of the proportions of the isotopes is 100%: x + y = 1 (100%).
We thus have two equations with two unknowns, which are easily solved:
The proportions are thus 24.23% of 37Cl and 75.77% of 35Cl.
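The two-equation system above reduces to one line of algebra once y = 1 − x is substituted:

```python
# Solve x*m35 + y*m37 = m_avg together with x + y = 1 for the abundances.
m35, m37, m_avg = 34.96885, 36.96590, 35.4527

x = (m_avg - m37) / (m35 - m37)  # abundance of 35Cl
y = 1 - x                        # abundance of 37Cl
print(f"35Cl: {100 * x:.2f}%  37Cl: {100 * y:.2f}%")  # 75.77% and 24.23%
```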
11. We first determine the molar mass of the acid: M = 25 g / 0.1726 mol = 144.84 g/mol.
The mass of X is the molar mass of the acid minus that of the known atoms: MX = 144.84 − 1.008 − 4×15.9994 = 79.84 g/mol.
X is thus bromine, Br.
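The same identification can be scripted; the small candidate list below is illustrative, not exhaustive:

```python
# Identify X in HXO4 from the sample mass and amount of substance.
M_acid = 25 / 0.1726               # molar mass of HXO4 in g/mol
M_X = M_acid - 1.008 - 4 * 15.999  # subtract one H and four O

# Small lookup of candidate atomic masses (g/mol); pick the closest match.
candidates = {"Cl": 35.453, "Mn": 54.938, "Br": 79.904, "I": 126.904}
element = min(candidates, key=lambda el: abs(candidates[el] - M_X))
print(round(M_X, 1), element)  # ~79.8 -> Br
```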
12. 7 in halogens, 2 in alkaline earth metals.
Oxidation-reduction reactions, or redox reactions, involve a transfer of charge between molecules. During such reactions, some chemical energy is transformed into electrical energy.
An oxidation reaction is a reaction during which a substrate (molecule, atom or ion) loses electrons.
A reduction reaction is a reaction during which a substrate gains electrons.
An oxidant is a substrate with the ability to oxidise other substances; during this process, the oxidant is itself reduced. It is also called an oxidiser or oxidising agent.
A reductant is a substrate with the ability to reduce other substances; during this process, the reductant is itself oxidised. It is also called a reducer or reducing agent.
The state of oxidation
The state of oxidation (SO) is an integer whose value is the charge an atom would carry if all its bonds were broken. In O2, each atom takes its own electrons back (a homolytic cleavage, or homolysis). The global charge of O2 is neutral, so the SO of each oxygen is 0. In H2O, however, the oxygen takes the electrons from the hydrogens when the bonds are broken, because oxygen is more electronegative than hydrogen (a heterolytic cleavage, or heterolysis). The SO of oxygen in H2O is thus −2 and the SO of each hydrogen is +1.
In general, the SO of oxygen is −2, and we can find the SO of the other atoms of a molecule without its full representation. For example, we can determine the state of oxidation of the manganese in MnO4–: the global charge is −1 and each oxygen has a SO of −2. The SO of Mn is thus +7, so that the sum of the states of oxidation equals the global charge: −1 = +7 + 4×(−2). In hydrogen peroxide, H2O2, the SO of the oxygens is −1.
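The bookkeeping rule "unknown SO = total charge minus the sum of the known SO" is easy to mechanise:

```python
def oxidation_state(total_charge, others):
    """State of oxidation of the remaining atom in an ion or molecule.

    others: list of (SO, count) pairs for the atoms whose SO is already known."""
    return total_charge - sum(so * n for so, n in others)

# Mn in MnO4-: total charge -1, four oxygens at SO -2 each.
print(oxidation_state(-1, [(-2, 4)]))       # -> 7
# Average SO of S in S4O6^2-: charge -2, six oxygens at -2, shared by 4 S.
print(oxidation_state(-2, [(-2, 6)]) / 4)   # -> 2.5
```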
An atom can thus have several possible SO. If several SO coexist for one element in a single molecule, we take their average: S, for example, has an average SO of +2.5 in S4O6²–. Sulphur can take a SO from −2 (H2S) through 0 (solid sulphur) up to +6 (H2SO4).
So, in a redox reaction, an oxidant oxidises a reductant while the reductant reduces the oxidant. In the presence of two compounds, however, it is not always obvious to determine the direction of the reaction.
This reaction involves two half reactions:
The reaction goes in the direction for which the free enthalpy change is negative. In the present case, the reaction goes from left to right, which can be explained by the fact that Cu is more electronegative than Zn: it is harder to take an electron from Cu than from Zn.
It is possible to determine the strength of oxidants and reductants from their ability to attract electrons in a battery, i.e. their standard potential ε0 (in volts). However, no absolute value of the standard potential can be measured: we can only know ε0 with regard to another couple. The couple H+/H2 is used as the reference, its standard potential being set to ε0 = 0.000 V in sctp by convention. The name of a couple is written in the direction of the reduction, oxidant/reductant: for example Cu2+/Cu, Zn2+/Zn.
To establish a redox reaction correctly, a method exists:
First, we write the supposed equation without stoichiometry, protons, water or OH–; just the oxidants and reductants, whose states of oxidation we determine.
Second, we determine the individual standard potentials of each couple. The values of ε0 can be found for a large range of compounds on the back of Mendeleev tables. They are given for the reduction reactions (for an oxidation, take the opposite sign). For iron, the reaction is
For the manganese, 5 electrons are added to obtain Mn2+.
Be careful to consider how many atoms are reduced/oxidised: in the couple Cr2O7²–/Cr3+, Cr goes from SO +6 to +3, but 6 electrons have to be added in the reaction because two Cr are reduced.
Coming back to the problem involving manganese, the reaction is not balanced yet: oxygens are also involved in the process. We balance the reaction with water molecules and protons or OH–, depending on the acidity of the solution. To determine how many species are required, we count the charges on each side of the arrow:
There are 6 negative charges on the left (−1 from MnO4– plus −5 from the five electrons) and +2 on the right. The difference of 8 charges is balanced by the addition of 8 protons on the left, and we balance the equation:
The standard potential of this reaction is known. Now we put the two half reactions together. Don't forget that one half reaction involves 5 electrons and the other only 1: the equation of Fe3+/Fe2+ is thus multiplied by 5.
This final reaction is correctly balanced. Note that even though the half reaction Fe3+/Fe2+ is multiplied by 5, its potential is not multiplied in the determination of the standard potential. As Δε0 > 0, the reaction goes from left to right. A reaction is spontaneous if the free enthalpy change ΔG0 < 0. The enthalpy of reaction is a measure of the work required to carry out the reaction. If this value is negative, the reaction is spontaneous and liberates energy, generally as heat: the reaction is said to be exothermic. If it is positive, energy must be supplied for the reaction to proceed: the reaction is endothermic, as it absorbs heat from its surroundings.
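A sketch of the spontaneity check, using the standard relation ΔG⁰ = −nFΔε⁰ and assumed textbook potentials (ε⁰(MnO4–/Mn2+) = +1.51 V, ε⁰(Fe3+/Fe2+) = +0.77 V):

```python
F = 96485  # Faraday constant, C/mol

def reaction_free_enthalpy(n, delta_eps):
    """Delta G0 = -n * F * delta_eps0, in J/mol, for n electrons exchanged."""
    return -n * F * delta_eps

# Assumed table values for the MnO4-/Fe2+ reaction discussed in the text.
delta_eps = 1.51 - 0.77            # V; positive, so the reaction runs left to right
dG = reaction_free_enthalpy(5, delta_eps)
print(round(dG / 1000, 1), "kJ/mol")  # ~-357 kJ/mol: negative, hence spontaneous
```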
The relation between the free enthalpy of a reaction and its potential is ΔG0 = −nFΔε0, where n is the number of exchanged electrons and F the Faraday constant.
This relation comes from the fact that a potential V is the variation of work W with respect to the charge Q: V = dW/dQ.
In batteries, the elements of a redox reaction are separated. The two solutions are connected by a salt bridge and by two electrodes connected to a voltmeter.
A salt bridge is a device used to connect two half cells; it is full of ionic species that conduct electricity (electrolytes) without interfering with the compounds of the battery. Without the salt bridge, one half cell would accumulate negative charge and the other positive charge as the reaction proceeds. The cations and anions of the salt bridge are chosen to have similarly high conductivities. KCl, KNO3 and NH4Cl are some examples of electrolytes composing a salt bridge.
A battery is written by convention as follows:
The electrodes are put at the extremities and the compounds of the two half cells are separated by a double line symbolising the salt bridge. The oxidation is written first and the reduction second.
The oxidation of Fe2+ takes place at the platinum electrode, which catches the freed electron. This electron travels through the voltmeter to the other half cell, where it is used in the reduction. The salt bridge closes the electric circuit.
The potential of a battery, also called the electromotive force, is given by the Nernst equation and depends on the concentrations of the different species:
We can also obtain the potential from the difference between the two cells of the battery:
Be careful: the concentrations of Fe2+ and Fe3+ are inverted in the last equation, because there we consider the products and reactants of the half cells.
In a concentration battery, the electromotive force is given only by the concentrations of the species:
The two sides of the battery have the same ε0, and the equation for the electromotive force thus reduces to
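A minimal sketch of such a concentration cell, with hypothetical 0.1 M and 0.001 M half cells and a single exchanged electron:

```python
import math

R, F, T = 8.314, 96485, 298.15  # gas constant (J/mol/K), Faraday (C/mol), T (K)

def concentration_cell_emf(c_high, c_low, n=1):
    """EMF of a concentration battery: both half cells share the same eps0,
    so only the ratio of the concentrations survives in the Nernst equation."""
    return (R * T / (n * F)) * math.log(c_high / c_low)

emf = concentration_cell_emf(0.1, 0.001)
print(round(emf, 3), "V")  # ~0.118 V for a hundredfold concentration ratio
```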
Here we cannot use KCl as the electrolyte, because Cl– would react with Ag+ to form a precipitate of AgCl.
Electrodes of reference
A reference electrode is an electrode whose potential is known and does not vary during an experiment. The standard hydrogen electrode (SHE) is a first reference electrode, but it is not often used because it is not entirely reproducible.
Dihydrogen gas is introduced at a pressure of 1 atm into a 1 M acid solution.
The silver chloride electrode is commonly used as a reference electrode.
The potential of the electrode depends on the concentration of the ionic form of Ag.
However, this concentration also depends on the solubility of AgCl in water.
The dissolution constant of AgCl is KS = [Ag+][Cl–]. By saturating the solution with KCl (>3.6 M), the concentration of chloride is kept constant, hence the fixed concentration of Ag+.
The potential written above is not totally correct, because at large concentrations the concentration of an ion is no longer equal to its activity.
The saturated calomel electrode uses the same principle:
Here the mercury cations precipitate with chloride, which is saturated in the same way as for the silver chloride electrode.
The reverse of a battery is electrolysis: a current is applied to a cell to force a reaction with a negative electromotive force. It is a way to deposit metals from a solution.
The quantity of deposited metal depends on the applied current:
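The dependence is Faraday's law of electrolysis, m = M·I·t/(n·F); a sketch with an assumed, illustrative copper deposition (Cu2+ + 2e– → Cu):

```python
F = 96485  # Faraday constant, C/mol

def deposited_mass(M, I, t, n):
    """Faraday's law of electrolysis: mass (g) deposited by a current I (A)
    applied during t seconds, for a metal of molar mass M (g/mol)
    whose reduction requires n electrons."""
    return M * I * t / (n * F)

# Hypothetical example: copper (M = 63.546 g/mol, n = 2) with 2 A for one hour.
m = deposited_mass(63.546, 2, 3600, 2)
print(round(m, 2), "g")  # ~2.37 g of copper
```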
All the elements of the first column of the Mendeleev table are obtained by electrolysis. In nature they exist in their oxidised form, because the world is oxidising: H2O and O2 are everywhere. The production of Na can be done from its salt by electrolysis:
This reaction is performed at high temperature (>600°C) and in total absence of water.
In some cases a single species plays simultaneously the roles of oxidant and reductant. A reaction involving such a process is called a disproportionation. The salt of copper dissociates in water into Cu+ and Cl–, yet solid copper is obtained during the process. It is a consequence of the disproportionation of copper:
As the second reaction has a larger potential, the global reaction forms solid copper:
The Cu+ dissociated from CuCl spontaneously forms Cu2+ and Cu(s).
Carbon monoxide is the result of the reaction between CO2 and C (a comproportionation, the reverse of a disproportionation). This process is responsible for many incidents, often lethal, in bathrooms with insufficient ventilation: traces of carbon, obtained from burned organic compounds, react with the carbon dioxide emitted by boilers or heaters. The problem is that CO takes the place of O2 on our red blood cells and is much more strongly bound to them than O2 (by a factor of about 200). Once CO is bound to a blood cell, O2 can hardly bind any more; because of that, even a small proportion of CO can be catastrophic. To treat people affected by CO, they are placed in a room overpressured with O2 to shift the equilibrium and remove the carbon monoxide.
The standard potential of a reaction can be found from its intermediate reactions.
In total, 3 electrons are required to obtain iron in its solid state: 1 electron for the first step and 2 for the second. The (approximate) standard potential is found by combining the standard potentials of the steps, each weighted by the number of electrons it requires:
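The electron-weighted combination can be sketched with assumed table values for the two iron steps (ε⁰(Fe3+/Fe2+) = +0.77 V, ε⁰(Fe2+/Fe) = −0.44 V):

```python
def combined_potential(steps):
    """Combine standard potentials of successive reduction steps,
    weighting each eps0 by its number of exchanged electrons.

    steps: list of (n_electrons, eps0) pairs."""
    total_e = sum(n for n, _ in steps)
    return sum(n * eps for n, eps in steps) / total_e

# Assumed values: Fe3+ + e- -> Fe2+ (+0.77 V), then Fe2+ + 2e- -> Fe (-0.44 V).
eps = combined_potential([(1, 0.77), (2, -0.44)])
print(round(eps, 3), "V")  # ~-0.037 V: slightly negative, as the text states
```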
In total, the process has a negative standard potential. The reverse reaction is thus spontaneous in the presence of an oxidant: it is the production of rust (hydrated Fe3+) from iron. On a car, for example, the iron is protected by a thin layer of paint/coating that keeps water away from the metal. If there is a defect in the coating, the iron is oxidised, but the rust will not always appear at that location: the electrons freed by the oxidation can move through the metal.
To avoid this, surfaces are connected by an electric cable to a piece of zinc. The zinc oxidises instead of the iron because its standard potential is smaller.
Rust is a porous material, and oxygen can pass through it to react further. ZnO is not porous, so a monolayer of Zn can protect iron pieces from rust. This is also the principle of stainless steel, but applied with Cr2O3.
It is often useful to take a look at the history of something to understand it; that is how we will begin our lessons on chemistry. As far back as we can go, chemistry started with the discovery of fire, which is basically the combustion of a reactant to obtain heat from it. Later, different metals were discovered, giving their names to the iron, copper and bronze ages.
However, we cannot yet talk about a scientific method at this point; it is more a matter of evolution. Evolution is a process of adapting as well as possible to one's environment, most of the time through trial and error: processes which led to a better adaptation were repeated while the others were not. Fire, for example, hugely improved the life of men for now obvious reasons, yet the process was not deeply understood.
Rationalisation is first seen with the Egyptians (fabrication of glass, beer and dyes), in China (porcelain) and then with the Greeks. It was Leucippus and then Democritus who described matter as composed of small unbreakable particles, the atomos. The Greeks also claimed that the world is composed of 4 main elements: earth, water, air and fire. We could now compare those to the three main phases (solid, liquid, gas) and energy.
The scientific method was developed during the XVI century. The method consists in 3 steps:
- Observation of a phenomenon: gives quantitative and qualitative information
- Hypothesis: tries to give possible explanations to the observed phenomenon
- Experiments: gather new information on the phenomenon and confirm or refute the theories developed in the previous step.
Before that, men merely described what they saw; from that point on, they tried to explain what they saw through theories.
Stoichiometry and determination of the atomic masses
One of the fathers of modern chemistry is Lavoisier. The statement he is known for is "Nothing is lost, nothing is created, everything is transformed", meaning that the total mass of the products of a reaction equals the total mass of the reactants. This statement is indeed true, except for nuclear reactions, during which a part of the mass is converted into energy.
Joseph Proust stated that a chemical compound always contains exactly the same proportion of elements by mass. For example, in pure water, the mass of hydrogen is always 1/9 of the mass of the sample while the oxygen makes up the 8/9 of the mass.
To complete this law, Dalton observed that during a reaction, the masses of the compounds that react together are always in ratios of simple integers. For example, oxygen (O) and carbon (C) can react together in several ways
1g of C + 1.33g of O → 2.33g of CO
1g of C + 2.66g of O → 3.66g of CO2
to form carbon monoxide or carbon dioxide. The ratio between the masses of oxygen is 1 to 2. This is the basis of stoichiometry. Berthollet protested against that law because one of his experiments gave opposite results. The experiment involved a solid of CuO in which the ratio between Cu and O is neither constant nor a simple integer. The reason is that solids may have imperfections: basically, empty spaces or atoms replaced by others. This is why Berthollet obtained a formula of Cu1-xO instead of CuO.
Dalton established an atomic theory:
- All matter is made of atoms. Atoms are indivisible and indestructible.
- All atoms of a given element are identical in mass and properties. Atoms of different elements are different.
- Compounds are formed by a combination of two or more atoms. There is no formation of new atom (except nuclear reactions).
- A chemical reaction is a rearrangement of atoms.
The mass of each element had to be determined. The first works were performed by Cannizzaro, basing his experiments on a principle enounced by Avogadro: in normal conditions of temperature and pressure, identical volumes of gas contain the same number of particles. Knowing the proportion of carbon in different gases, Cannizzaro determined its mass:
Compound | Mass (g) | % of carbon | Mass of carbon (g)
The mass of C was determined this way: C has a mass of 12 atomic mass units (u). Subsequently, the mass of oxygen (16 u) was determined from carbon dioxide (CO2), and so on. Initially, some errors occurred, typically when the assumed formula of a compound was too simple. For example, it was known that 2 g of H react with 16 g of O to form 18 g of water. Assuming the simplest formula, H has a mass of 1 u (which is correct) but O would have a mass of 8 u.
Moles and Avogadro's Number
The mole is one of the seven units of the International System of Units (SI Units): kilogram for mass, meter for length, second for time, Kelvin for temperature, ampere for electric current, candela for luminous intensity and mole for the amount of substance. The symbol for mole is mol.
Coming back to Avogadro: one of the most important numbers in chemistry, though rarely written out explicitly, is Avogadro's number NA. As atoms are unbreakable, there are obviously several atoms in 12 g of C. A mole expresses the number of atoms of carbon in 12 g of carbon.
This relation, Mi = NA·mi, is true for any element i. Mi is the molar mass of i, i.e. the mass of one mole of the element i; its unit is g/mol (or g mol-1). mi is the mass of one atom of the element i. In the case of carbon, MC = 12 g mol-1. mi being a mass, the unit of NA is mol-1. The value of NA was initially determined by Johann Josef Loschmidt, who calculated the number of particles in a given volume of gas. The accuracy of that measure was perfectible, and there are now experiments which give more accurate results than this method.
NA= 6.02214129(27)×1023 mol−1
Ourselves and our environment are thus made of an amazingly large number of atoms that interact together to form matter, air, liquids and, most importantly, life. The idea that molecules of the living could be crafted was not accepted before the XIX century. Friedrich Wöhler, a German chemist, can be considered a pioneer of organic chemistry. At that time the hypothesis of vitalism was popular: any compound, to be living, needs a vital force given by God, and humans should not be able to synthesize any organic compound without this vital force. Wöhler proved this theory wrong by producing urea, accidentally, from inorganic substances. Even if urea is a waste product of our body (excreted in urine), it is an organic compound, and it should have been impossible for Wöhler to synthesize it without the intervention of the vital force of a living species. Wöhler wanted to produce ammonium cyanate from potassium cyanate (KNCO) and ammonium chloride; however, the target product is unstable and acts only as an intermediate, decomposing into urea.
Stoichiometry is the relation between the quantities of reactants and products during a chemical reaction.
A chemical reaction is written as an equation, placing the reactants on the left of an arrow and the products on the right. Several reactants can react together to form one single product. The species are separated by a +.
In this reaction, hydrogen and oxygen are mixed to produce water (H2O). Hydrogen and oxygen are separated by a + on the left of the arrow because they are the reactants, and the water stands on the right of the arrow because it is the product of the reaction. Several products can be formed from one or more reactants.
In this case, two products are generated by the chemical reaction: water and carbon dioxide. They are also separated by a + and are still at the right of the equation.
Now, these equations are not complete. We have to respect the law of conservation of mass (of Lavoisier): nothing is lost, nothing is created, everything is transformed. The quantity of each atom on the left and on the right of a chemical equation has to be identical. In the first equation, we wrote that one mole of H2 reacts with one mole of O2 to form one mole of H2O. The number of H is equal before and after the reaction (there are 2 of them in H2 and 2 in H2O), but one atom of oxygen would be lost. To obtain the correct equation, we put coefficients, called stoichiometric coefficients, before the species:
Thus 2 moles of H2 react with one mole of oxygen to produce two moles of water. Here, the quantity of each atom is identical on each side of the arrow. This notation is also correct, as long as the numbers of atoms are integers:
The second equation that we wrote was also incorrect:
The number of carbon atoms C is correct, but the quantities of H and O change during the reaction. As there are 4 hydrogen atoms on the left, we put a coefficient 2 in front of the water to have 4 H on the right of the equation. Now we have 4 oxygen atoms in the products and only 2 in the reactants. To correct this, a coefficient 2 is placed before the O2. The number of C is still correct and the equation is
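The balancing rule above can be checked mechanically by counting atoms on each side. Here is a minimal sketch in Python (the element-count dictionaries are just an assumed representation for this illustration, not a standard notation):

```python
# Verify Lavoisier's law: the atom counts on each side of a reaction
# must be identical. Species are (coefficient, {element: count}) pairs.
from collections import Counter

def atom_count(species):
    """Sum atom counts over a list of (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in species:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

def is_balanced(reactants, products):
    return atom_count(reactants) == atom_count(products)

# 2 H2 + O2 -> 2 H2O  (balanced)
print(is_balanced([(2, {"H": 2}), (1, {"O": 2})],
                  [(2, {"H": 2, "O": 1})]))                         # True
# CH4 + O2 -> CO2 + H2O  (not balanced: H and O differ)
print(is_balanced([(1, {"C": 1, "H": 4}), (1, {"O": 2})],
                  [(1, {"C": 1, "O": 2}), (1, {"H": 2, "O": 1})]))  # False
```

With the coefficients 2 added before O2 and H2O, the second reaction would balance as well.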
There can be some variations to this notation. If a specific solvent is required for a reaction to happen, we indicate it above or under the arrow.
This reaction is the dissolution of salt (NaCl, table salt, the salt we add to our food) in water. The elements of the salt separate into the corresponding ions, i.e. charged species. Positively charged ions are called cations and negatively charged ones anions. If we need to heat the solution for the reaction to happen, we also indicate it near the arrow by a Δ or a ΔT.
A reaction that requires heat is an endothermic reaction. If the reaction generates heat, it is said to be exothermic. Note that in the previous reaction, we indicated the states of the compounds in brackets: g means that the species is gaseous, s that it is a solid, l stands for a liquid and aq for an aqueous solution. The heat generated by an exothermic reaction is written as a product, either as Q or as its exact value in kJ/mol if it is known. Some reactions produce light, also indicated as a product by hν, i.e. a photon of frequency ν.
The last point to mention is that not all reactions are complete. A complete reaction means that, if the reactants are put in stoichiometric proportions, all of them are consumed during the reaction to form the products. If one reactant exceeds the stoichiometric proportion, it is in excess and an amount of this reactant, corresponding to the excess, remains after the reaction. In incomplete reactions, called equilibrium reactions, the reactants are not all consumed even if they are put in stoichiometric proportions: an equilibrium establishes itself between the quantities of reactants and products. Incomplete does not mean that no products are formed, only that just a part of the reactants is converted. For example, acetic acid is a weak acid that does not completely dissociate in water.
As a result, we find three species in solution: CH3COOH, CH3COO– and H+. Some of the reactant molecules formed the products and some did not react. Note that the arrow in the chemical equation differs from the one used for complete reactions: it is now two half arrows, meaning that the reaction can go in both directions. For equilibrium reactions, we define an equilibrium constant K such that
The square brackets mean that we consider the concentrations of the species, each raised to the power of its stoichiometric coefficient.
1. Equilibrate those equations
2. If we put together 2g of Br2 and 1g of H2, how many moles of HBr can be produced? What is the mass of the excess of reactant?
3. Write the general equation for the combustion of the organic compounds CxHy and CxHyOz
2. The reaction consumes 1 mole of each reactant to form 2 moles of HBr.
As Br2 has by far the larger molar mass (159.8 g/mol against 2.016 g/mol for H2), H2 will be in excess: 2 g of Br2 is 0.0125 mol while 1 g of H2 is 0.496 mol.
As a result, only 0.0125 mol of H2 is consumed and 0.025 mol of HBr is produced by the reaction. The excess of H2 is 0.484 mol, i.e. about 0.975 g of unreacted H2.
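The arithmetic of this limiting-reagent answer can be reproduced in a few lines of Python (the molar masses are standard rounded values, not taken from the text):

```python
# Limiting-reagent calculation for H2 + Br2 -> 2 HBr.
M_H2, M_Br2 = 2.016, 159.81   # molar masses in g/mol (standard values)

n_H2 = 1.0 / M_H2     # ~0.496 mol of H2
n_Br2 = 2.0 / M_Br2   # ~0.0125 mol of Br2 -> the limiting reagent

n_HBr = 2 * min(n_H2, n_Br2)        # 1:1 stoichiometry, 2 HBr formed
excess_H2 = (n_H2 - n_Br2) * M_H2   # mass of unreacted H2, in g

print(f"HBr produced: {n_HBr:.4f} mol")   # 0.0250 mol
print(f"Excess H2:    {excess_H2:.3f} g") # 0.975 g
```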
3. A combustion reaction is the reaction between a reactant and oxygen. For organic molecules, it generates CO2 and water.
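The general coefficients follow from atom conservation: CxHyOz needs (x + y/4 − z/2) O2 to give x CO2 and y/2 H2O. A small sketch computing them (exact fractions are used so half-integer coefficients stay readable):

```python
# Stoichiometric coefficients for CxHyOz + a O2 -> x CO2 + (y/2) H2O,
# with a = x + y/4 - z/2, derived from C, H and O conservation.
from fractions import Fraction

def combustion(x, y, z=0):
    o2 = Fraction(x) + Fraction(y, 4) - Fraction(z, 2)
    return o2, x, Fraction(y, 2)

# CH4 + 2 O2 -> CO2 + 2 H2O
print(combustion(1, 4))      # (Fraction(2, 1), 1, Fraction(2, 1))
# Ethanol C2H6O + 3 O2 -> 2 CO2 + 3 H2O
print(combustion(2, 6, 1))   # (Fraction(3, 1), 2, Fraction(3, 1))
```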
In this module we will review one of the main types of chemical reaction. Reactions can indeed be classified into 3 major categories:
- Acid-base reactions
- Redox (reduction and oxidation) reactions
- Solubility reactions (dissolution and precipitation)
The last two reaction types are covered in other sections of our lessons. Here we will focus on acid-base reactions. The first step will be to introduce the definitions of acidic and basic compounds and the notion of acidity. We will then look at the strength of different acids and bases and explain how to follow experimentally the neutralisation of an acid by a base.
A proton is a hydrogen (H) atom that has lost its electron (e–). Consequently, a proton consists of a bare nucleus, which is positively charged. The proton is an ion: a charged species. Keep in mind that the nucleus of an atom occupies only a very small fraction of the atom's volume. Because of its small size, a proton can diffuse into almost anything and move through a material until it is neutralised.
Several definitions have been given for acids and bases. Arrhenius proposed:
- An acid is a donor of protons
- A base is a donor of OH–
This definition works well for several compounds, for example:
However, some basic compounds do not possess any OH group and can still neutralise acids. For example NH3 can react with H+ but cannot release any OH–. One attempted explanation was to introduce NH4OH,
but this compound simply does not exist.
Brønsted and Lowry proposed another theory:
- An acid is a donor of protons
- A base is an acceptor of protons
When a base reacts with an acid, they form respectively their conjugate acid and conjugate base
Considering any acid HA, the equation can be written:
HA loses a proton to form its conjugate base A–. The base B receives the proton to form its conjugate acid HB+.
An interesting point of this theory is that the acidity of a compound depends on the reaction in which it takes part. It allows some compounds to be both an acid and a base. For example, H2O can donate or receive protons.
Such a compound is called amphoteric. In water, there are thus both an acid and a base present. However, when we drink water or put a hand into it, we don't feel these substances (note that the water we use in everyday tasks contains ions, which modify its taste and slightly its acidity). As explained earlier, an acid attacks deep into materials, while bases affect surfaces by removing their protons. So why does nothing happen? Because the three substances (H2O, H3O+ and OH–) are in equilibrium. The reaction just above goes in both directions, as shown by the double arrow, but not at the same speed in each direction. The equilibrium constant for the reaction going from left to right is Kw=10-14mol2l-2=[H3O+][OH–]. For the other direction, the constant is K=1/Kw=1014mol-2l2. That means that the equilibrium lies strongly to the left: it is rare for a molecule of H2O to autoprotolyse and, when it occurs, the reverse reaction is very fast.
From Kw, we can determine the concentration of protons (or H3O+) in water.
Because H3O+ and OH– are produced in equal amounts, their concentrations are equal: [H3O+]=[OH–]. Then
In pure water, the concentration of protons is thus 10-7M (M=mol/l) at any moment. If an acid is put into water, the amount of protons in solution increases. Conversely, if a base is put in water, the amount of protons decreases. The acidity of a solution is thus measured by the concentration of protons it contains. For convenience, we use a scale, called potential of hydrogen or pH, defined as minus the logarithm of the concentration of protons,
and it goes from 0 to 14 in aqueous solutions. At pH=0, the concentration of protons in solution is 1M. At pH=14, nearly all the protons have been removed from the solution by the base. The pH is not infinite because a few protons always remain in solution, due to the equilibrium. pH=7 is the neutral pH and is the pH of pure water. Most living species are adapted to this neutral pH; some others have adapted to basic or acidic conditions to avoid predation or competition for resources.
We can also speak of pOH for bases, with pOH=-log [OH–]. pOH is however generally not used: for a basic aqueous solution, it is easier to refer to pH=14-pOH=14+log[OH–].
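The definitions above translate directly into two one-line formulas; a quick sketch to check the landmark values of the scale:

```python
# The pH scale: pH = -log10 [H3O+], and pH = 14 - pOH in aqueous solution.
import math

Kw = 1e-14   # ionic product of water at 25 degrees C, mol2 l-2

def pH_from_conc(h3o):
    """pH from the proton concentration in mol/l."""
    return -math.log10(h3o)

def pH_from_pOH(pOH):
    return 14 - pOH

print(round(pH_from_conc(math.sqrt(Kw)), 1))  # 7.0 : pure water is neutral
print(round(pH_from_conc(0.1), 1))            # 1.0 : 0.1 M of protons
print(pH_from_pOH(2.0))                       # 12.0 : a basic solution
```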
Lewis acids and bases
In the same year, Gilbert Newton Lewis proposed an alternative, and broader, definition of acids and bases: a Lewis base is a compound that can donate an electron pair to a Lewis acid, a compound that can accept an electron pair. Using the same notation as above,
The two dots in this notation represent the pair of electrons that the Lewis base B and the conjugate base A– carry. The proton is a Lewis acid, accepting pairs of electrons. With such a definition, acids are no longer limited to substances carrying hydrogen atoms. For example BF3 is a Lewis acid, as the boron can accept a pair of electrons.
Dissociation of H and OH
Note that on some occasions, H does not dissociate as a proton. When the bond between two atoms is broken, the pair of electrons remains with the atom of greater electronegativity (χ–).
H-Cl: χ–Cl=3.16 χ–H=2.2
In this case, and as expected, the pair of electrons remains on the chlorine atom because its electronegativity is larger than that of hydrogen.
Na-H: χ–Na=0.93 χ–H=2.2
Sodium hydride is one of the few exceptions where the hydrogen atom takes the pair of electrons. Indeed, the electronegativity of sodium is very low in comparison with that of H. This molecule splits into Na+ and H–.
If we look now at the O-H bond:
This group, typical of basic compounds, can break to free a proton. A molecule carrying an OH group may then be acidic or basic depending on the atom connected to the oxygen.
In NaOH for example, the electronegativity of Na (χ–Na=0.93) is far smaller than that of hydrogen (χ–H=2.2). As a result, it is the bond between Na and O that breaks. As the O is already negatively charged, the O-H bond won't split to give O2- and H+.
On the contrary, in HClO (Cl-O-H), the electronegativity difference between Cl and O is larger than the one between O and H. As a result, hypochlorous acid splits into ClO– and H+.
To summarise, the acidity of a substance depends on the reaction in which it takes part. The presence of an H or an OH group in a substance does not mean that it is an acid or a base and, vice versa, the fact that a substance is acidic or basic does not mean that it carries an H or OH group.
Measure of pH
Different methods exist to measure or to give an idea of the pH of a solution.
When a few droplets of pH indicator are added to a solution, the indicator gives the solution a colour depending on its acidity: within a given range of pH the solution takes one colour, while it takes a different colour in another range. These ranges are not necessarily 0-7 and 7-14 and depend on the pH indicator used. The change of colour is due to interactions between protons and the molecules of the pH indicator.
For example, bromocresol green is yellow in its acidic form and blue in its basic form. There is a transition range for bromocresol green between pH 3.8 and 5.4 where its colour is green, the colour of "its neutral form" (in fact a mix of the acidic and basic forms). The structure of bromocresol green is shown in Figure 1. The colour of the solution does not vary noticeably within one pH range but only at the limits between two ranges, and only a few drops are enough to obtain a visible colour. Moreover, as there are interactions between the pH indicator and the protons, the pH of the solution is slightly affected by the presence of the indicator.
Figure 1: Structure of the bromocresol green in its acidic (left) and basic (middle and right) form. There are two resonance structures of the basic compound. Resonance will be seen in further chapters (organic chemistry)
Note that at pH=5.5, for example, this indicator is already in its basic form even though the solution is acidic. pH indicators are thus only useful to get an idea of the acidity of a solution. Many pH indicators exist, however, and they are easy to use.
The pH paper is a paper containing several pH indicators.
Initially yellow, its colour varies with the pH of the solution, from deep red for acids to deep blue for bases. Usually one droplet of the solution is placed on the pH paper, giving it its colour. One can then compare the colour of the paper with the scale on the box of the pH paper to determine the pH of the solution.
The pH-meter determines the concentration of protons in solution by means of an electrode plunged into the solution. It is more accurate than the two other methods but may need calibration. Its functioning will be explained later.
The general definition of an acid is thus a compound releasing protons. However, not all acids have the same strength, or acidity. We can define two types of acids and bases: strong acids and bases, and weak acids and bases. For simplicity, we will focus on acids in this lesson, but the principle is identical for bases.
Strong acids dissociate totally in solution. It means that every single molecule of acid put in water frees a proton and acidifies the solution. For example, HCl is a strong acid.
If one mole of HCl is put in water, all the HCl dissociates, and in solution we only find H2O, one mole of Cl– and one mole of H3O+. For this kind of reaction, the arrow separating reactants and products is a single arrow going from left to right, as the reaction goes only one way. The pH can thus be found directly from the quantity of HCl put in solution: pH=-log [H3O+] and, as the reaction is complete, the quantity of H3O+ in solution is equal to the quantity of HCl put in solution. The concentration of protons is thus equal to the initial concentration of HCl: [H3O+]=[HCl]0.
For example, if 0.1 mole of HCl is put in water to obtain a total volume of 1l, [H3O+]=0.1mol/l and pH=1. In the lab, the acid usually comes already in solution at a large concentration (6M for example) and has to be diluted to the concentration desired for the experiment. Remember that precautions must be taken when you manipulate acids and bases, especially concentrated ones. Use a pipette bulb to pipette them, not your mouth. Another "holy" rule is that "one does not baptise an acid", meaning that to dilute an acid, you add the acid to water and not water to the acid. The reason is that the dilution of an acid is highly exothermic and droplets of acid may be ejected out of the container.
The effect of a dilution on the pH is simple. If a solution of pH=2 ([H3O+]=0.01mol/l) is diluted 10 times, the pH increases by one (as the scale is logarithmic) and pH=3 ([H3O+]=0.001mol/l), etc. For bases, a dilution decreases the pH of the solution towards neutrality (pH=7): it is the concentration of OH– which is affected in this case, and pH=-log[H3O+]=14+log[OH–].
However, a large dilution of an acid won't lead to a basic solution. Diluting a solution of pH=6 by 100 does not give a solution of pH=8 but approximately pH=7. In this case it is the water that mainly sets the pH: the concentration of protons coming from the acid becomes negligible with regard to the concentration of protons freed by the water.
To be considered a strong acid, the dissociation constant of the acid has to be large enough to protonate all the H2O molecules of the solution into H3O+. Formally, strong acids have pKa<-1.74. Let's explain that. We have seen that water has a dissociation constant Kw=10-14mol2l-2. The dissociation constant of an acid is noted Ka and, the same way pH is –log of the proton concentration, pKa=-log Ka. For example HBr has a pKa of -8.7. The limit pKa<-1.74 comes simply from the concentration of water:
In 1l of water there is 1kg of H2O. The molar mass of H2O being 18.015 g/mol, the concentration of pure water is [H2O]=55.5 mol/l, and –log of this concentration is -1.74.
To summarise, to be able to protonate all the water molecules of the solution, which is the condition to be considered a strong acid, an acid must have pKa<-1.74. Some widely used acids are usually considered strong although they do not meet this condition, having 0>pKa>-1.74, because they fully dissociate in diluted solutions. These are the almost strong acids.
Among strong acids we find hydrochloric acid (HCl, almost strong acid), sulphuric acid (H2SO4), nitric acid (HNO3, almost strong acid), hydroiodic acid (HI), perchloric acid (HClO4), hydrobromic acid (HBr) and many others.
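The −1.74 limit is just this small computation:

```python
# Concentration of pure water and the pKa limit for strong acids.
import math

M_H2O = 18.015                # molar mass of water, g/mol
c_water = 1000.0 / M_H2O      # 1 l of water weighs 1 kg -> mol/l

print(round(c_water, 1))                 # 55.5 mol/l
print(round(-math.log10(c_water), 2))    # -1.74 : the strong-acid limit
```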
Examples of strong bases: Sodium hydroxide (NaOH), Potassium hydroxide (KOH), Calcium hydroxide (Ca(OH)2),…
The conjugate bases of strong acids are very weak bases and are inert as bases. Indeed, the basicity of the conjugate base of an acid (and inversely) is related to the Ka of the acid through Ka·Kb=Kw=10-14. Imagine for a second that the conjugate base reacts with water: if we add the reactions of the acid and of its conjugate base, we obtain the autoprotolysis of water:
HCl has a Ka=103 and the Kb of Cl– is thus Kb=10-17.
Not all acids dissociate completely in water. Acids with pKa>0 are considered weak acids. Because not all the molecules of acid dissociate, there is an equilibrium between the dissociated and undissociated forms of the acid.
The equilibrium is represented by the two arrows between reactants and products.
An example of weak acid is the acetic acid (CH3COOH).
In solution there is thus a mixture of those 4 species. The pH is still determined by the amount of protons in solution. How do we find this quantity in the case of weak acids?
The equilibrium constant of this reaction is
Lets take a look at the concentration of the species before and after the reaction
One part of the initial concentration of the acid (Ca) has reacted. The quantities of protons and of the conjugate base (CH3COO–) produced by the reaction are equal.
Generally, Ca>>[H+] (be careful with this approximation for diluted solutions), leading to the following relation.
The pH of the solution can thus be found from the initial concentration of the acid put in solution and from its Ka:
For bases, the relation is similar:
These relations can also be written as
From these equations, one can directly see why this kind of acid is weak compared with strong acids: to increase the pH by 1, a strong acid is diluted by 10 while a weak acid has to be diluted by 100.
Contrary to strong acids, the conjugate bases of weak acids are active as bases in the solution. For example, the conjugate base of a weak acid with Ka=10-4 has Kb=10-10.
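A short numerical check of this factor-100 behaviour, using acetic acid with an assumed pKa of 4.76 (the text does not give the value):

```python
# pH of a weak acid under the approximation Ca >> [H+]:
# [H+] = sqrt(Ka * Ca), i.e. pH = (pKa - log10 Ca) / 2.
import math

def weak_acid_pH(pKa, Ca):
    return 0.5 * (pKa - math.log10(Ca))

pKa = 4.76   # acetic acid (assumed value)
print(round(weak_acid_pH(pKa, 0.1), 2))     # 2.88
print(round(weak_acid_pH(pKa, 0.001), 2))   # 3.88 : diluting by 100 adds only 1
```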
Factors influencing the acidity
Electronegativity: the electronegativity refers to the ability of an atom to keep its electrons, and the electrons of the bonds it shares, near its nucleus. For two bonded atoms of equal electronegativity, the electrons composing the bond are not static: statistically, they spend an equal time at each end of the bond. For two bonded atoms of different electronegativity, the electrons spend more time in the vicinity of the atom of larger electronegativity. This generates a separation of charges, a dipole, with a partial negative charge (noted δ–) on the electronegative element and a partial positive charge (noted δ+) on the electropositive element. In acids, H carries a partial positive charge that depends on the electronegativity of the atom it is bonded to.
This partial positive charge favours the dissociation of H and thus increases the acidity of the molecule. Looking at the periodic table, elements become more electronegative moving from left to right across a row (excluding the noble gases), and the strength of the acid formed by the element and hydrogen increases accordingly.
Electronegative elements that are not directly bonded to the hydrogen can also pull electrons away from it. The effect is much smaller but should not be neglected.
Radius: larger atoms have their bonding electrons further from the nucleus than small atoms. Because of this distance, these electrons are less strongly held: the interaction with the nucleus is smaller and the charge of the nucleus is partially shielded by the electrons of the inner shells. As a consequence, the bond is more easily broken to release a proton when the atom carrying the hydrogen is large. The sequence of acidity of the halogen acids shows this clearly. Hydrofluoric acid HF is a weak acid (pKa=3.2), less acidic than HCl, HBr or HI even though the electronegativity of fluorine is larger than theirs, because fluorine is much smaller (by a factor 2 to 3). The bond between the fluorine and the hydrogen is thus stronger, as the electrons are close to the nucleus. HI is the largest of the sequence and is also the most acidic halogen acid: the pKa values are ordered -9.3 (HI) < -8.7 (HBr) < -6.3 (HCl) << 3.2 (HF). Moving down a column of the periodic table, the elements increase in size and become less electronegative. The size effect tends to dominate the variation of electronegativity, and the acidity of compounds carrying hydrogen atoms increases.
Earlier, we mentioned the sulphuric acid, H2SO4. This acid has two protons available.
H2SO4 is a strong acid (pKa1=-3): when sulphuric acid is put in solution, the first proton is freed and there should be no H2SO4 remaining in solution. On the other hand, HSO4– is a weak acid (pKa2=1.9) and does not totally dissociate in water. To determine the pH, we proceed as we did for the weak acid:
With simple algebra, one can see that [HSO4–]=2Ca-[H3O+] and we can write
From that point, we obtain a second degree equation
That has now to be solved to obtain
The pH can still be calculated from the dissociation constant and from the initial concentration of acid put in solution, even if the resolution is a bit more difficult.
Amphoteric species are species that show both acidic and basic character. HCO3– is an example of an amphoteric species. Like sulphuric acid, H2CO3 is a polyacid. However, carbonic acid is a weak acid and there is thus an equilibrium involving HCO3– as the conjugate base of H2CO3.
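Written out, the mass-action law Ka2=[H3O+][SO42-]/[HSO4–] with [HSO4–]=2Ca-[H3O+] gives the quadratic [H3O+]²+(Ka2-Ca)[H3O+]-2Ka2Ca=0. A sketch solving it numerically (the 0.01 M concentration is a hypothetical example):

```python
# pH of sulphuric acid: first proton fully dissociated, second governed
# by Ka2. Solve h^2 + (Ka2 - Ca)h - 2*Ka2*Ca = 0 and keep the positive root.
import math

def h2so4_pH(Ca, pKa2=1.9):
    Ka2 = 10 ** (-pKa2)
    b, c = Ka2 - Ca, -2 * Ka2 * Ca
    h = (-b + math.sqrt(b * b - 4 * c)) / 2   # positive root of the quadratic
    return -math.log10(h)

print(round(h2so4_pH(0.01), 2))   # 1.83 : between pH 2 (one proton) and 1.7 (two)
```

As a sanity check, the result lies between the pH the acid would have if only one proton dissociated (pH=2) and if both did completely (pH≈1.7).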
Water is also an amphoteric species as it can free or accept a proton.
There is a particular pH at which the amphoteric species acts equally as an acid and as a base. This pH is called the isoelectric pH, or pI. For H2O, the isoelectric pH is 7, but in general it can be determined from the values of Ka:
Considering that [H2CO3]=[ CO32-] at this pH,
Amino acids are other amphoteric species. Such molecules carry an acid group and a basic group. Fig. 1 shows the structure of an amino acid in its acidic (left), neutral (middle) and basic (right) form. The acidic group of the amino acid is the COOH group: its hydrogen atom can be released, leaving a negative charge on the oxygen. This charge is stabilised by resonance over the COO– group. The amine group on the left of the molecule plays the role of the base: the nitrogen possesses a pair of electrons available to accept a proton.
This amphoteric property of amino acids is used experimentally to separate them from one another. Because of their particular structures (R varies from one amino acid to another), each amino acid has a different isoelectric pH. The molecules are placed on a gel containing a pH gradient, inside an electric field. As long as an amino acid is not in its neutral form, it is attracted by an electrode placed at one end of the gel. Each amino acid thus stops moving at a different place on the gel, and they can be separated. For example alanine (R=CH3) has pI=6 while pI=5.48 for phenylalanine (R=CH2C6H5).
You can find here a few exercises applying the theory explained in this section and, possibly, in related sections. Most of the questions should be simple to answer, but some may require a calculator or be tougher. Answers are given underneath.
- What is the pH of a solution of HCl 0.5M?
- How do I proceed experimentally to obtain 100ml of HCl 0.05M?
- What is the pH of this solution?
- If I put a droplet of this solution on a bit of pH paper, what color does the paper take?
- Is HClO (Cl-O-H) an acid or a base?
- What is the pH of a solution of HClO 0.025M (pKa=7.497)?
- If this solution is diluted by 10, what is its pH? and if diluted by another 10?
- What is the pH of a solution of NaOH 0.01M? of NH3 0.01M (Kb=1.8×10−5)? At the equilibrium, how much NH3 remains in solution?
- pH=0.3: HCl is a strong acid and completely dissociates in solution. pH=-log[H3O+]=-log[HCl]=0.3
- Even if the acid is not very concentrated in this case, its pH is still low and precautions have to be taken to manipulate such a solution. Here we just need to dilute the acid solution by 10. To obtain 100ml of diluted solution, we need a 10ml pipette with its bulb, and a 100ml volumetric flask. One does not baptise an acid: the flask is thus first filled with some water (50ml for example) and 10ml of the acid solution is added using the pipette and its bulb. Avoid using your mouth. Mix the solution and add water up to the graduation mark. Mix one more time.
- pH=1.3: As the initial solution has been diluted by 10, pH increases by 1. No need to calculate here.
- Deep red
- HClO is an acid: it was fully explained in the previous section: in HClO (Cl-O-H), the electronegativity difference between Cl-O is larger than the one between O-H. The hypochlorous acid splits in ClO– and H+. χ–Cl=3.16, χ–H=2.2, χ–O=3.44
- pH=4.55. The hypochlorous acid is a weak acid. The pH formula is then
- pH=5.05 for a dilution by 10 and 5.55 for a dilution by 100.
- NaOH: pH=12, NH3: pH=10.63, [NH3]=0.009576M. We could simply use the formula for weak bases for NH3 (remember that pKa+pKb=pKw), but we will need the details for the third part of the question, so let's develop the problem:
Considering that Cb>>[OH–] (and we will see that this approximation is correct), we find
So of the 0.01M of NH3 put in solution, only ~4% dissociates. The remaining concentration of NH3 is [NH3]=0.009576M. The pH is found using pH=14-pOH=10.63.
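The NH3 numbers above can be verified directly with the weak-base approximation:

```python
# Weak base NH3: [OH-] = sqrt(Kb * Cb) under the approximation Cb >> [OH-].
import math

Kb, Cb = 1.8e-5, 0.01
oh = math.sqrt(Kb * Cb)          # ~4.24e-4 M of OH-
pH = 14 + math.log10(oh)         # pH = 14 - pOH = 14 + log10 [OH-]

print(round(pH, 2))              # 10.63
print(round(Cb - oh, 6))         # 0.009576 M of NH3 left
print(round(100 * oh / Cb, 1))   # 4.2 % dissociated
```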
A neutralisation reaction is the reaction occurring between an acid and a base, forming a salt and water.
Technically, the neutralisation is not a one-step reaction, in the sense that everything does not happen simultaneously but step by step. The first step is the dissociation of the acid and of the base from their conjugate species. The second step is the formation of the salt from the conjugate species and of water from H3O+ and OH–. For example, NaOH is neutralised by HCl to form NaCl (cooking salt) and water.
The first equation is in fact the sum of the 4 equations underneath. If a compound appears on both sides of the equation, like the ionic species here, we do not write it in the overall equation. Note that not all the Na+ and Cl– ions
react to give NaCl: there is an equilibrium between the species in solution and the precipitating salt. This equilibrium is discussed in the section on Dissolution.
The neutralisation point is reached when the quantities of acid and base put in solution are equal. All the reactants are then consumed and, for this reaction, the pH is neutral, i.e. pH=7.
Titration of strong acids/bases
Titration is a method used to determine the concentration of a compound through its neutralisation. For example, the concentration of a solution of HCl can be determined by the addition of a solution of NaOH of known concentration. As we are in the presence of a strong acid (HCl) and a strong base (NaOH), both completely dissociate in solution. The concentration of protons is initially equal to the concentration of Cl–. In the other solution, the concentrations of OH– and of Na+ are equal to the known concentration of NaOH.
To obtain a neutral pH, the number of protons na (in moles) has to be equal to the number of OH–, nb (in moles). In other words, neutralisation, or equivalence, is reached when
The number of moles of a compound in a solution is simply the concentration of this species multiplied by the volume of the solution:
From the two previous relations, we can find the initial concentration of acid that we wanted to determine:
As an example, if 20ml of NaOH 0.01M was required to neutralise a volume of 10ml of the HCl solution, [HCl]=0.02M.
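The worked example reduces to the relation Ca=Cb·Vb/Va:

```python
# Concentration of the titrated acid from the equivalence condition
# na = nb, i.e. Ca * Va = Cb * Vb.
def titration_conc(Cb, Vb, Va):
    return Cb * Vb / Va

# 20 ml of NaOH 0.01 M neutralises 10 ml of the HCl solution:
print(titration_conc(0.01, 20.0, 10.0))   # 0.02 M
```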
In the laboratory, titrations are performed as follows:
A flask containing a given volume of the unknown solution is placed on a magnetic stirrer. The magnetic bar is placed in the solution to mix it continuously during the experiment. To be able to observe the neutralisation, two droplets of pH indicator are added to the solution. Several indicators can be used to observe the passage through pH=7. Here we will use bromothymol blue, the colour of which changes from yellow (pH<6.0) to blue (pH>7.6); at pH=7 the colour is green. Considering the previous example, our solution is thus yellow, and we can already say that it is acidic with a pH smaller than 6.
The neutralising solution, NaOH 0.01M in our case, fills a burette placed a few centimetres above the other solution. There is no need to add colour indicator to this solution. Manipulating the burette carefully, NaOH is slowly added to the acid solution; the consumed volume of base can be read on the graduations of the burette. When the pH approaches 6, one can see the droplets of base turning blue while mixing into the solution. If the concentration of the acid is completely unknown, it is convenient to perform a fast first experiment to determine an approximate neutralisation volume, and then a second experiment, going slowly only as this volume is approached. Typically, the colour changes from yellow to green, or directly to blue, at the fall of a single droplet. The precision of the experiment is thus limited by the precision of the burette; generally, the volume of a droplet is half a graduation.
The titration curve of this example is shown next.
Remember that the pH is minus the logarithm of the concentration of protons while the addition of base is linear, and that the volume of the solution increases as base is added (don't forget that point when choosing the size of the flask). As we can see, the pH variation is mainly concentrated in the vicinity of the neutralisation point: at 20ml, pH=7, but at 19.95ml (about one droplet less, for a 50ml burette), pH=4.78, and at 20.05ml, pH=9.22.
We can also find the equivalence point by drawing the tangents to the curve in the acid region and in the base region. These two tangents are parallel, and the line equidistant from them crosses the curve at the equivalence point.
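These steep pH values around the equivalence point can be recomputed from the excess of acid or base in the mixed volume:

```python
# pH along a strong acid / strong base titration curve
# (10 ml of HCl 0.02 M titrated with NaOH 0.01 M, as in the example).
import math

Ca, Va = 0.02, 10.0   # titrated acid: concentration (mol/l), volume (ml)
Cb = 0.01             # base in the burette (mol/l)

def pH_after(Vb):
    n_acid, n_base = Ca * Va, Cb * Vb   # amounts in mmol
    V = Va + Vb                         # total volume in ml
    diff = n_acid - n_base
    if abs(diff) < 1e-12:               # at the equivalence point
        return 7.0
    if diff > 0:                        # acid still in excess
        return -math.log10(diff / V)
    return 14 + math.log10(-diff / V)   # base in excess

print(round(pH_after(19.95), 2))   # 4.78
print(pH_after(20.0))              # 7.0
print(round(pH_after(20.05), 2))   # 9.22
```

One droplet on each side of the equivalence moves the pH by more than 2 units, which is exactly why the indicator changes colour so abruptly.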
Titration of weak acids/bases
The titration of a weak acid is done with a strong base and follows the same principle. Let's perform the titration of acetic acid. The reaction is
This reaction is complete. Before the titration, the pH of the acetic acid solution is simply given by the relation for a weak acid seen in the previous section
As for the strong acids, the equivalence is reached when
However, the pH at the equivalence is not neutral but basic. Indeed, the conjugate species of a strong base/acid is inert, but the conjugate species of a weak acid/base is itself a weak base/acid: pKa + pKb = 14. All the acetic acid and the NaOH have been consumed, but acetate has been produced, and it is a weak base.
Before the equivalence, the pH depends on the quantity of weak acid and of its conjugate base:
This mixture of a weak acid and its conjugate base is called a buffer solution, because the addition of a strong base or acid does not modify the pH of the solution appreciably. Buffer solutions are very important for living species to resist sudden variations of the environment. An example of a buffered medium is our stomach: no matter what we eat or drink, its pH is (approximately) unaffected, so that digestion can proceed. Throughout our body, enzymes are effective in a given range of pH and, to keep them working, the pH has to be regulated through buffer solutions.
The semi-equivalence is the point at which there is as much CH3COOH as CH3COO– (Ca=Cb). At this point, pH=pKa. To reach the semi-equivalence point, the added volume of base is half the volume needed to reach the equivalence.
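The buffer relation and the pH=pKa result at semi-equivalence can be sketched with the Henderson-Hasselbalch form of the Ka expression (the acetic acid pKa of 4.76 is an assumed value, not given in the text):

```python
# Buffer pH: pH = pKa + log10([conjugate base]/[acid]).
import math

def buffer_pH(pKa, Ca, Cb):
    return pKa + math.log10(Cb / Ca)

pKa = 4.76   # acetic acid (assumed value)
print(buffer_pH(pKa, 0.1, 0.1))            # 4.76 : pH = pKa at semi-equivalence
print(round(buffer_pH(pKa, 0.1, 0.2), 2))  # 5.06 : doubling the base adds log10(2)
```

The logarithm is what makes the buffer effective: even a large change of the base/acid ratio only shifts the pH by a fraction of a unit.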
At the equivalence, pH is given by the amount of acetate in the solution (formula for a weak base). This amount is equal to the one of NaOH added to the solution.
After the equivalence point, pH is given by the quantity of NaOH in the solution.
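The formulas used in the different regions can be gathered in a short Python sketch; the pKa of 4.76 for acetic acid and the 0.1 M / 20 ml quantities are assumed values for illustration:

```python
import math

pKa = 4.76          # acetic acid (assumed tabulated value)
Ca, Va = 0.1, 20.0  # acid concentration (mol/l) and volume (ml), assumed
Cb = 0.1            # NaOH concentration (mol/l), assumed

def weak_titration_pH(Vb):
    """pH during the titration of a weak acid by a strong base, using the
    approximate formulas of this chapter region by region."""
    n_a = Ca * Va                       # mmol of weak acid initially
    n_b = Cb * Vb                       # mmol of strong base added
    V = Va + Vb
    if n_b == 0:                        # weak acid alone: pH = (pKa - log Ca)/2
        return 0.5 * (pKa - math.log10(Ca))
    if n_b < n_a:                       # buffer zone: Henderson-Hasselbalch
        return pKa + math.log10(n_b / (n_a - n_b))
    if n_b == n_a:                      # equivalence: only the weak base (acetate)
        pKb = 14 - pKa
        pOH = 0.5 * (pKb - math.log10(n_a / V))
        return 14 - pOH
    return 14 + math.log10((n_b - n_a) / V)   # excess strong base

print(round(weak_titration_pH(0.0), 2))    # initial pH of the weak acid
print(round(weak_titration_pH(10.0), 2))   # semi-equivalence: pH = pKa
print(round(weak_titration_pH(20.0), 2))   # equivalence: basic, not 7
```

Note that the computed equivalence pH is above 7, as argued above: only acetate, a weak base, remains at that point.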
Titration of a polyacid
Considering a polyacid HnA (a concrete example will be given later) with different enough pKa’s, the neutralizations of the different forms of the acid are successive: The OH– will first neutralize the protons released by HnA and then the protons released by Hn-1A–, etc.
The initial pH of the solution is the pH of HnA. The concentrations of the subsequent acids are negligible. HnA may be a strong acid or a weak acid.
At the first equivalence point, Hn-1A– is the main species in solution. It is an amphoteric species, i.e. it can accept or donate protons. The pH is thus
At the next semi-equivalence point, [Hn-1A–]=[ Hn-2A2-] and we are in a buffer solution. Remember that the pH in buffer solutions is
The pH is thus pH=pKa2. Note that if the initial acid is a weak acid, the same is true for the first semi-equivalence point, i.e. pH=pKa1.
It is interesting to note that for those specific values, pH does not depend on concentrations.
Let’s take the case of H3PO4 as an example. Its pKa’s are very different from each other
The neutralisations are successive and we can thus find the specific points (semi-equivalences and equivalences) determined above.
H3PO4 is a weak acid. The initial pH of the solution is thus
Before the equivalence, H3PO4 and H2PO4– are in solution. This buffer solution has a pH of
With pH=pKa1 when [H3PO4]=[H2PO4–], at the semi-equivalence.
At the first equivalence, H2PO4– is the main species in solution. It is an amphoteric species and the pH is thus
After the first equivalence, H2PO4– and HPO42- are in solution. This is again a buffer solution.
With pH=pKa2 when [H2PO4–]=[HPO42-], i.e. at the second semi-equivalence. At the second equivalence point, HPO42- is the main species in solution and is amphoteric.
After this equivalence, the solution is again a buffer solution.
With pH=pKa3 when [HPO42-]=[PO43-], i.e. at the third semi-equivalence.
At the third equivalence, PO43- is the main species in solution. This is not an amphoteric species but a weak base. The pH should be
However, Ka3 is very close to Kw. Protons released by water compete with protons from HPO42- and the prediction no longer holds. To calculate the pH, we need to go back to the full composition of the solution and solve the complete set of equations. This will not be done here.
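The special points discussed above are simple to compute. A short Python sketch, using assumed tabulated pKa values for H3PO4 (close to the ones used in this chapter) and an assumed 0.1 M initial concentration:

```python
import math

# assumed tabulated pKa's of H3PO4
pKa1, pKa2, pKa3 = 2.148, 7.198, 12.375
C = 0.1   # mol/l, assumed initial concentration

pH_initial = 0.5 * (pKa1 - math.log10(C))   # weak acid formula for H3PO4
pH_eq1 = 0.5 * (pKa1 + pKa2)                # first equivalence: amphoteric H2PO4-
pH_eq2 = 0.5 * (pKa2 + pKa3)                # second equivalence: amphoteric HPO4^2-

print(round(pH_initial, 2), round(pH_eq1, 2), round(pH_eq2, 2))
# at the semi-equivalences (buffer zones), pH = pKa1, pKa2, pKa3 directly
```

As stated above, the amphoteric-point values do not depend on the concentration; only the initial pH does.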
- What is the color of bromothymol blue in a 20ml solution of NaOH 0.005M? And after adding 10ml, 20ml or 30ml of HCl 0.005M?
- What is the color of bromothymol blue in a 20ml solution of H3PO4 0.1M? And after adding 10ml, 20ml or 30ml of NaOH 0.1M?
- 0ml: blue (pH=11.7), 10ml: blue (pH=11.22), 20ml: green (pH=7), 30ml: yellow (pH=3).
- 0ml: yellow (pH=1.57), 10ml: yellow (pH=pKa1=2.147), 20ml: yellow (pH=4.67), 30ml: green (pH=pKa2=7.2).
Organic chemistry is the chemistry of carbon and its compounds. Carbon is one element of the periodic table among many others, so why is there a complete section of chemistry devoted to this particular element? Carbon has a valence of 4 and can thus bind with up to 4 other atoms. Up to that point there is nothing extraordinary. However, where in inorganic chemistry atoms generally bind together to form small molecules, carbon-based molecules can form long and stable chains bearing a rich variety of functional groups.
Organic chemistry is so called because carbon is the essential constituent of living species: proteins, DNA, lipids, sugars and fats are a few examples of organic compounds, made of a carbon skeleton bearing functional groups that allow them to interact, becoming more than their simple sum and forming a functioning macrosystem in which each molecule has a specific role in sustaining a stable living body.
Organic chemistry is thus very important in the life sciences, but is not limited to that. Plastics, which we find everywhere, oil, toothpaste, shampoos, clothes, deodorants, etc. are products of organic chemistry.
To venture into this vast world that is organic chemistry, we will first discuss alkanes, their structures, how they are named and how they are represented. Later we will introduce the different functional groups found on organic compounds, and finally how organic compounds react and how we can produce or modify them.
Alkanes are compounds composed only of carbon and hydrogen atoms. Hydrogen has a valence of 1, meaning that it can only make one bond with another atom. A single carbon will thus bind with 4 hydrogens to form the neutral species CH4. This molecule is called methane and is a gas under normal conditions. The bonds are covalent. Carbon is slightly more electronegative than hydrogen, but that is not important at this point of the lesson. Just remember that CH4 will not release a proton and is not an acid.
There are several ways to represent this molecule. The fully developed representation is as follows:
In this representation of methane, all the bonds are shown as full lines connecting the atoms, and all the atoms are shown explicitly. This representation is in two dimensions; in 3D the hydrogen atoms are in reality not in a single plane. The structure of lowest energy is the one where the hydrogens are the furthest apart from each other. Indeed, hydrogens occupy a given volume and repel each other. In that structure, an angle of 109.5° separates the bonds, which leads to a tetrahedral structure.
Most of the time, there is no point in showing the complete structure of large molecules; it would only make it harder to see the important information. However, it is sometimes important to know in which direction a particular bond points. In this case, the lines representing the bonds take different forms depending on their orientation. Bonds in the plane of the page are still represented by a simple line. Two other cases are possible: bonds can point towards the reader or away from the reader. Bonds pointing towards the reader are represented by a filled triangle (a wedge), one corner of which is connected to the atom in the plane of the page while the opposite edge is at the atom out of the plane. This way, the wedge looks like a line growing wider from the atom in the plane to the atom closer to the reader.
Bonds pointing away from the reader are drawn as a series of short dashes (parallel or perpendicular to the bond direction, as you prefer). For methane, this gives the following 3D representation:
The alkane possessing a skeleton of 2 carbon atoms is ethane. Like methane, it is a gas. To form the backbone of ethane, the two carbon atoms bind together through a covalent bond; they obviously have the same electronegativity. Of its 4 electrons, each carbon thus uses one to bind with the other carbon, and 6 hydrogens complete the structure. As in inorganic chemistry, the octet rule is respected: to be stable, a carbon has to have 8 electrons (an octet) around it: its own 4 electrons plus 4 electrons from the atoms with which it shares bonds. Each carbon of ethane is thus bound to one carbon and 3 hydrogens. It is the only possible structure for this compound: in no case would one carbon carry 5 hydrogens and the other carbon only 1.
The fully developed structure of ethane is thus
If we want to represent it in 3D, it would be:
However, it should be said that atoms can rotate around the axis of a single bond. The 6 hydrogens thus turn in circles around the axis made by the two carbons, as shown above. The hydrogens rotate almost freely around this axis.
Each hydrogen occupies a given volume and feels the atoms in its vicinity (steric hindrance). During the rotation, the distance between hydrogens on the same carbon is constant, but the distance to the closest hydrogen carried by the other carbon changes.
The relative positions of substituents can be shown through the Newman projection. The molecule is observed along its C-C axis. The first carbon (the proximal carbon) is represented by a circle from the centre of which three lines radiate; these lines are the bonds of this carbon. The second carbon (the distal carbon) is hidden by the first one, but part of each of its bonds is visible.
Fixing the hydrogens of the proximal carbon, only the hydrogens of the distal carbon can move. Two cases can be observed:
- the hydrogens of the distal and proximal carbons are in the same spots: the conformation is called eclipsed
- the hydrogens are not in the same spots: the conformation is called staggered
A maximum of energy is reached in the eclipsed conformation because the repulsion between the hydrogens is maximal there. A rotation of 60° from this conformation leads to a minimum of energy, the hydrogens being as far away from each other as they can be. A molecule that has to keep its hydrogens (or substituents) eclipsed has a higher energy than a molecule of the same composition with staggered hydrogens. The difference in energy here is not very large and the rotation does take place. During the rotation, the molecule spends more time in the staggered conformation (of lower energy). If substituents are present, the steric hindrance increases with the radius of the substituent; in some cases, the rotation can be blocked by the presence of voluminous substituents.
The semi-developed representation of ethane is
In this representation, the carbons are grouped with the atoms they carry but do not share; the bonds between C and H are thus not shown. If a hydrogen atom were replaced by a chlorine atom, for example, the semi-developed representation would be:
Adding a third carbon atom to the chain gives C3H8, propane, still a linear alkane. The triangular structure in which each carbon binds to the two other carbons exists but is not very stable. You may have spotted that the formula of linear alkanes follows a general pattern: CnH2n+2. For each carbon atom added to the first one, 2 hydrogens are added.
There are now two ways to add a fourth carbon to obtain a butane molecule: the chain can be extended at its extremities or at its middle. When the chain is linear, we add n- before the name of the compound. n-butane is thus
If the chain is extended at its middle, we name this compound isobutane
The iso prefix is used only for a few compounds in which one carbon bears two terminal CH3 groups. n-butane and isobutane share the same formula C4H10 but do not have the same structure; such compounds are called isomers of constitution. The greater the number of carbons in an alkane, the greater the number of isomers.
For a 5-carbon chain, the fifth carbon can be added at one extremity of n-butane to obtain n-pentane, at any extremity of isobutane (equivalent to adding the carbon on one CH2 of n-butane) to obtain isopentane, or on the CH of isobutane to obtain neopentane.
Names of the alkanes:
Another representation of organic molecules is the skeleton representation. Here, carbons and hydrogens are not shown. The bonds between carbons are still drawn as full lines, connected at an angle at the position of each carbon atom. Without these angles, we could not distinguish a chain of 6 carbons from a chain of 7. Generally, the angle is approximately 120°, so that if a carbon is bound to three other groups (other than H), the bonds are equally spaced. If a carbon is bound to 4 groups, the angle is 90°.
For example, the pentanes shown above are represented
This skeleton representation is the one usually used: only the important information is shown. The number of hydrogens on each carbon of the skeleton is determined by the number of bonds that carbon already has, so there is no point in showing them. Moreover, this representation is faster to write and takes less space.
There is a definite method to name organic compounds. An alkane used as a substituent keeps the same name except that the -ane is replaced by -yl. For example, isobutane is also called methylpropane because a methyl is fixed on a linear chain of 3 carbons, i.e. a propane chain.
C4H9Cl is a chlorobutane. From this name, we know the components of the compound but not its complete structure: the connectivity within the butane and between the butane and the chlorine is not known.
The rules to name a compound are
- the longest chain is the main one; however, if a functional group is present, the main chain has to carry it.
- number the carbons of the main chain from one end to the other. The carbon carrying a functional group that is closest to an extremity must receive the smallest number.
- next, we name the compound by writing first the groups outside the main chain, with their numbers as prefixes, in alphabetical order, followed by the name of the main chain.
- our examples are thus named
- if several identical groups are on different carbons, their locants are separated by commas and their number is indicated by di, tri,…
- ex: isooctane is 2,2,4-trimethylpentane, meaning that a total of 3 methyl groups sit on a main chain of 5 carbons: two methyls on carbon #2 and one on #4. It is 2,2,4-trimethylpentane and not 2,4,4-trimethylpentane because the set of locants must be as low as possible at the first point of difference (2,2,4 beats 2,4,4).
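The "lowest locants at the first point of difference" rule behaves exactly like tuple comparison in Python; a minimal sketch (the helper name is hypothetical):

```python
def lowest_locants(*candidates):
    """Return the locant set that is lowest at the first point of difference.
    Python compares tuples element by element, which is exactly this rule."""
    return min(tuple(sorted(c)) for c in candidates)

print(lowest_locants((2, 2, 4), (2, 4, 4)))   # → (2, 2, 4): isooctane numbering
```

The first positions tie at 2, and the comparison is decided at the second position, where 2 beats 4.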
Simply put, halogenoalkanes are alkanes carrying one or more halogens. Simply said, but halogenoalkanes are not so easily made. They are made from a dihalogen and an alkane through a radical reaction during which a hydrogen has to be removed from the alkane. This step of the reaction is not favourable but can be achieved by strong heating (300°C for chloromethane). Moreover, the position of the halogen is not completely fixed: the hydrogen removed during the reaction is more easily removed from a carbon inside the chain than at an extremity, but the high temperature makes both positions possible (the distribution depends on the temperature).
Halogens have a higher electronegativity than carbon and they generate a dipole from C to X. A carbon carrying a halogen is thus poor in electrons and will consequently be targeted by electron-rich reactants. The reactivity of halogenoalkanes will be seen in a later section.
We have already seen that several different molecules may exist for one given formula. When the connectivity differs, these are isomers of constitution.
Ex: butane and methylpropane, ethanol and methoxymethane.
Molecules can also differ without any change in connectivity. Stereoisomers are isomers with the same connectivity but a different spatial arrangement. Bromochlorofluoromethane has two stereoisomeric forms.
These two molecules are mirror images of each other. A molecule is said to be chiral if it and its mirror image cannot be superimposed. Such stereoisomers are called enantiomers.
A good way to explain chirality is to look at our hands. The left hand is the mirror image of the right hand (and vice versa), yet we cannot superimpose them.
Chirality is related to a carbon bearing four different groups: a stereocentre. Stereocentres are often indicated by an asterisk. If a plane of symmetry exists for the molecule, the molecule is achiral (the opposite of chiral) and it can be superimposed on its mirror image. For example, bromofluoromethane is achiral because a plane of symmetry can be drawn through the C, Br and F atoms, exchanging the two hydrogens.
Our body is able to distinguish enantiomers from each other. For some medicines, one enantiomer is active while the other does absolutely nothing, or is less effective. In some cases it is thus very important to be able to produce one enantiomer selectively, and pharmaceutical industries have developed such methods. If a reaction is not enantioselective, the yield of the desired enantiomer immediately drops by 50%. The optical activity of enantiomers also differs and is a good way to know which enantiomer has been produced.
The optical activity of a compound is its influence on a plane polarised light beam. When the light, filtered only to oscillate in one plane, passes through a sample of an optically active compound, the beam is rotated by a given angle.
The angle of rotation depends on the molecules in the sample, on their concentration and on the length of the sample cell. Each of these effects is linear, and the rotation of the plane of the light is given by the formula
The interesting point here is that enantiomers do not have the same optical activity. The absolute value of the rotation is identical, but the direction in which the light is rotated is not: one enantiomer deviates the light towards the right and the other deviates it by the exact same angle towards the left. The enantiomers are respectively called dextrorotatory and levorotatory, and noted with a (+) or a (-).
The optical activity of an enantiomer is fixed for that molecule, at a given temperature t and for light of a given wavelength λ. Knowing its value, it is possible to determine the quantity of each enantiomer in a mixture of the two (the mixture is called racemic when the two enantiomers are present in equal quantities). In such a mixture, each species deviates the light with its normal effect.
If the two enantiomers are in equal quantities in the solution, the sample is optically inactive, the effect of one enantiomer being counterbalanced by the effect of its mirror image (i.e. the other enantiomer). If the quantities are not equal, the sample is optically active and the relative quantities of the enantiomers can be determined. The enantiomeric excess is the difference between the proportions of the two enantiomers, and is in practice the proportion of the enantiomer that has a net effect on the light. For example, if the ratio between the enantiomers is 3:1 (75% of one, say the dextrorotatory, and 25% of the other), the enantiomeric excess is 50%. Indeed, of the 75% of the dextrorotatory enantiomer, the effect of 25% is counterbalanced by the levorotatory enantiomer present in the solution: only 50% of the (+) enantiomer effectively deviates the light beam. If the pure enantiomer rotates the plane of the light by 26°, an enantiomeric excess of 50% rotates it by 13° (50% of 26°).
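This arithmetic can be sketched in a few lines of Python (the function name is hypothetical; the 26° value and 3:1 ratio come from the example above):

```python
def observed_rotation(alpha_pure, frac_plus):
    """Rotation (degrees) of a mixture of two enantiomers.
    alpha_pure: rotation of the pure (+) enantiomer;
    frac_plus: fraction of the (+) enantiomer in the mixture."""
    ee = abs(2 * frac_plus - 1)              # enantiomeric excess
    sign = 1 if frac_plus >= 0.5 else -1     # net direction of rotation
    return sign * ee * alpha_pure, ee

alpha, ee = observed_rotation(26.0, 0.75)    # the 3:1 example from the text
print(alpha, ee)                             # 13.0 degrees for an ee of 0.5
```

A 50:50 mixture gives frac_plus = 0.5, hence ee = 0 and no rotation at all, as stated above.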
Name of the enantiomers
We need a way to name the two enantiomers differently. R or S will precede the name of the molecule, and we will now see how to attribute each letter to each enantiomer. It is unfortunate, but there is no simple correlation between the optical activity of an enantiomer and its structure; another method had to be found.
The first step is to give a priority to each group attached to a stereocentre.
Priority is given according to the atomic number of the atom directly bound to the stereocentre. Let's name these groups A, B, C and D by decreasing priority (A has priority over B, B over C, C over D). If two identical atoms are bound to the stereocentre (two carbons, for example), we look at the atoms they carry and, again, the priority goes to the carbon carrying the atom of higher atomic number. If a methyl and an ethyl are bound to the stereocentre, the ethyl has priority: both groups are bound to the stereocentre by a carbon atom, so we look at the atoms on these carbons. The methyl carries 3 H while the ethyl carries 2 H and one C; as C is heavier than H, the ethyl has priority over the methyl.
Remember that it is the directly bound atom that matters, not the complete group. An –OH group has priority over an ethyl group because O is heavier than C, even though the groups weigh 17 and 29 atomic mass units respectively. When the atoms are identical, isotopes are ranked by their mass; isotopic substitution can thus create stereocentres in molecules.
Next, we look at the molecule with the group of lowest priority (D) pointing behind the stereocentre. Generally the group of lowest priority is a hydrogen atom. This group is not represented in the rest of the method.
Looking at the stereocentre this way, we only see the 3 bonds connecting it to the three groups of highest priority (A, B, C).
Now, we determine in which sense we rotate to go from the highest priority (A) to the lowest (C), passing by B. It may be helpful to place A at the top of the representation.
If we must go clockwise, it is the R enantiomer. If counterclockwise, we are in the presence of the S enantiomer.
A second method exists, giving the same results, using the Fischer projection.
Instead of placing one group behind the stereocentre, we put two groups horizontally and two groups vertically, again by a rotation of the stereocentre. The horizontal groups point towards the reader; the two groups on the vertical axis point away from the reader.
The rotation of the stereocentre is done by "grabbing" a pair of substituents and placing them in front of the molecule. Be careful to place the horizontal groups towards the reader and the vertical ones in the opposite direction; otherwise an R enantiomer becomes S and vice versa.
Once the rotation is done, to determine the configuration, the group of lowest priority has to be placed in the 12 o'clock position of the Fischer projection. Then the configuration is determined in the same way as for the Newman projection. If the lowest priority group is not in the top position after the rotation, don't worry: we can perform permutations between neighbouring substituents. Performing one permutation changes the configuration of the enantiomer from R to S and vice versa; performing two leaves the configuration unchanged.
If an even number of permutations (including zero) were needed to put the lowest priority group in the 12 o'clock position, we can determine the configuration of the enantiomer directly. If an odd number of permutations was done, you have two choices: either determine the apparent configuration, knowing that the correct configuration is the other one, or perform one additional permutation and then determine the configuration.
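The parity rule can be sketched as a tiny Python helper (the function name is hypothetical):

```python
def configuration_after_swaps(start, n_swaps):
    """Each pairwise permutation of two substituents inverts the configuration
    (R <-> S); an even number of permutations leaves it unchanged."""
    if n_swaps % 2 == 0:
        return start
    return 'S' if start == 'R' else 'R'

print(configuration_after_swaps('R', 1))   # one permutation: R becomes S
print(configuration_after_swaps('R', 2))   # two permutations: back to R
```

Only the parity of the number of swaps matters, which is why "one extra permutation" and "invert the answer" are equivalent strategies.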
Several stereocentres may be present on a single molecule. The configuration of each stereocentre (R or S) is determined independently. If two stereocentres are on one molecule, several configurations are possible: RR, SS, RS and SR.
For example, 2-Bromo-3-chlorobutane has 4 stereoisomers.
Not all of these stereoisomers are mirror images of each other: some pairs are enantiomers, but some are not. Stereoisomers that are not mirror images of each other are called diastereoisomers.
- Draw the skeleton structures of the isomers for C8H18. How many isomers did you find?
- Name the following molecules:
3. Draw the following molecule:
- Is this name correct? If not, correct it.
- Draw the Newman projections of the C-C bonds of this molecule and state whether they are eclipsed or staggered
6. Chiral or achiral? If chiral, indicate if they are R or S.
1. There are 18 isomers of constitution for C8H18. A good way to find them all is to start from the longest main chain and decrease its length step by step.
2.2 Neopentane or 2,2-dimethylpropane
2.3 4-ethyl-7-methyldecane. It is not 4-methyl-7-ethyldecane because, when the locant sets are the same from both ends of the chain, the lowest number is given to the substituent that comes first in alphabetical order
2.4 3-methyl-6-propyldecane (main chain from top to bottom right)
3. Names and sketch:
4. Correct or not?
- 2,5-dimethyl-4,6-dipropylnonane: correct
- 3-ethyl-7-methyloctane: incorrect: the methyl substituent is closer to an extremity than the ethyl. The correct name is 6-ethyl-2-methyloctane
- 2,5-dimethyl-4-ethyldecane: incorrect: substituents have to be named in the alphabetical order. The correct name is 4-ethyl-2,5-dimethyldecane
- 4-(1-methylethyl)-5-propyldecane: correct
5. Newman projections going from left to right
6.1 Chiral: R
6.3 Chiral: S
6.5 Chiral: S
A cycloalkane is, as its name indicates, a cyclic alkane chain. Each carbon of the chain is bound to (at least) two carbons and two hydrogens. The general formula is thus CnH2n, and the name of the compound is the name of the corresponding alkane with the prefix cyclo-.
The smallest cycle, cyclopropane, is made of 3 carbons. Each carbon is bound to the two others, giving a triangular shape.
That means that the carbons lie in the same plane and that the angle between bonds is 60°. This angle is far from the normal angle between bonds in alkanes: remember that carbons have a tetrahedral structure with an angle of 109.5° between bonds. To close the cycle, the carbon structure is deformed and a ring strain (cycle tension) is maintained. It is possible to estimate the importance of this tension from the heat of combustion ΔH°comb of the cycloalkane.
For a linear alkane, the heat of combustion increases in magnitude by approximately 658.5kJ/mol each time the chain is lengthened by one unit. One can conclude that the average contribution of a CH2 to ΔH°comb is 658.5kJ/mol. Applying this to cyclopropane, C3H6, the calculated ΔH°comb is -1975.5kJ/mol. However, when we perform the combustion experimentally, we find ΔH°comb=-2091.2kJ/mol.
Cyclopropane thus releases more heat than expected. The difference, 115.7kJ/mol (38.6kJ/mol per CH2), comes from the cycle tension, i.e. the molecule requires extra energy just to bind this way. In fact, the orbitals of the carbons are not well aligned: the angle between orbitals is 104°.
As a result, the bonds are weak and cyclopropane is not very stable. It is indeed easily opened by catalytic hydrogenation.
Finally, the position of the hydrogen atoms is unfavourable. Let's have a quick reminder of the eclipsed and staggered conformations of the hydrogens in ethane (C2H6). Hydrogens can rotate around the axis formed by the C-C bond. Each hydrogen occupies a given volume and feels the atoms in its vicinity (steric hindrance). During the rotation, for a given H, the distance to the closest hydrogen carried by the other carbon changes.
On the Newman projection,
- the hydrogens are in the same spots: the conformation is eclipsed
- the hydrogens are not in the same spots: the conformation is staggered
A maximum of energy is reached in the eclipsed conformation because the repulsion between the hydrogens is maximal there. Cyclopropane has to keep its hydrogens (or its substituents) eclipsed, and that costs the molecule a lot of energy: in cyclopropane, all the hydrogens are eclipsed. The difference in energy between the eclipsed and staggered conformations can be significant, as we will see for cycles of more than 3 carbons.
Performing the combustion experiment on cyclobutane, for which the angles are ~90°, we find an excess of 110.3kJ/mol (27.6kJ/mol per CH2) due to the cycle tension. It is less than for cyclopropane because the forced bending is smaller. The tensions of these two cycles are considerable; for larger cycles the tension decreases significantly and is at its minimum for a cycle of 6 carbons, cyclohexane.
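The strain estimates can be reproduced with a short Python sketch. The 658.5 kJ/mol increment and the cyclopropane value come from the text; the cyclobutane experimental value is reconstructed from the quoted 110.3 kJ/mol excess and is thus an assumption:

```python
DH_CH2 = 658.5   # kJ/mol of combustion heat per CH2, from linear alkanes

def ring_strain(n_carbons, dh_comb_exp):
    """Total and per-CH2 ring strain (kJ/mol) of a cycloalkane (CH2)n,
    computed from the magnitude of its experimental heat of combustion."""
    strain = dh_comb_exp - n_carbons * DH_CH2
    return strain, strain / n_carbons

print(ring_strain(3, 2091.2))    # cyclopropane: ~115.7 total, ~38.6 per CH2
print(ring_strain(4, 2744.3))    # cyclobutane: experimental value reconstructed
                                 # from the quoted 110.3 kJ/mol excess (assumed)
```

The per-CH2 strain drops from cyclopropane to cyclobutane, in line with the smaller forced bending.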
Cycles with more than 3 carbons are not planar. In cyclobutane, the angle between the fourth carbon of the molecule, which is out of the plane, and the plane formed by the three other carbons is 26°.
The angle between the carbons is 88.5°, slightly less than for a square (90°) in a single plane. So why is cyclobutane not planar? A planar conformation would indeed decrease the cycle tension, but in it the 8 hydrogens would be eclipsed. The 26° puckering of cyclobutane moves the hydrogens out of the eclipsed conformation: the small loss in angle is counterbalanced by the improvement in the positioning of the hydrogens.
The structure of cyclobutane oscillates quickly between two conformers: the carbon out of the plane moves from one side of the plane to the other. These two conformations are equivalent in energy. In the next figure, 4 H are represented.
Two of them (pointing upwards) are quite close to each other while the two others are distant. After the oscillation, the roles are inverted, so on average the hydrogens feel the same steric hindrance, also called transannular tension in this case.
Cycles can thus oscillate between several conformations when these have similar potential energies and the energetic barrier to switch conformation is small enough. In the case of cyclobutane the hindrance is very small, but if one of those 4 hydrogens were a substituent, the two conformers would no longer be equivalent: the molecule places itself preferentially in the most favourable conformation, the one in which the voluminous substituent is not affected by the transannular tension. The proportion of the conformers is then no longer 50:50.
The penalty for eclipsed hydrogens is clear in cyclopentane. In a regular pentagon, the angle is 108°, almost the normal angle for a tetrahedral carbon (109.5°). However, 10 hydrogens would then be eclipsing one another. Cyclopentane, and in fact any cycloalkane except cyclopropane, is not planar. Two conformations (each with two conformers) are possible: the envelope and the semi-chair.
In the envelope conformation, 4 carbons are in the same plane, with an angle of 104.4°. In the other conformation (semi-chair), the angles are smaller but the eclipsing effect is smaller as well. The two conformations have very close potential energies and the barriers between the forms are easily crossed: cyclopentane thus oscillates quickly between its conformers.
The case of cyclohexane is particular. When we look at the heat of combustion of this species, the experimental value differs by less than one (0.8) kJ/mol from the value calculated from the number of CH2 in the molecule. Cyclohexane is the most stable cycloalkane.
Two conformations exist but one is more stable than the other one.
The most stable conformation is the chair conformation.
In this conformation, 2 pairs of carbons are in the same plane and the last 2 carbons are on either side of the plane. The position is called chair: the 4 carbons in the plane make the seat, one plane of 3 carbons makes the back of the chair and the other makes the footrest.
The angle between carbons is 111.4°, i.e. almost the 109.5° of a normal tetrahedral carbon, and all the hydrogen atoms are in a staggered conformation.
This structure is thus very stable. Two types of hydrogens can be distinguished: the ones in axial positions and the ones in equatorial positions. 6 C-H bonds are parallel to the axis of the molecule (the axis passing through the middle of the molecule); these are the axial hydrogens. The other 6 bonds are almost perpendicular to this axis and are called equatorial.
If we reverse the chair structure (chair's back <-> footrest), equatorial hydrogens become axial ones and vice versa.
To reverse the chair, cyclohexane has to go through the boat conformation, less stable by 28.9kJ/mol (the energetic barrier is 45.2kJ/mol). In this conformation, the two carbons that were out of the plane are now on the same side of it. This not only generates steric hindrance, but the hydrogens on the 4 carbons of the plane now eclipse each other. That explains the difference in potential energy between the boat and chair conformations.
In reality, this conformation is only a transition state. A more stable form is the twist-boat conformation, almost identical but reducing the transannular tension. The boat conformation is thus the transition state between two twist-boat conformations.
We can summarize the conformations as follows:
Cyclohexane in the boat-type conformations exists only in very small proportion relative to the chair conformation. We will thus focus only on the chair conformation in the analyses that follow.
Presence of substituents on the cyclohexane
The positions of a substituent on the cyclohexane are not equivalent.
If a substituent is placed in an axial position, the steric hindrance is greater than in an equatorial position. Indeed, equatorial substituents are more widely spaced than axial ones, which all point in the same direction; the resulting crowding between an axial substituent and the axial hydrogens on the same face is called the 1,3-diaxial interaction. The equatorial position is therefore more stable than the axial one, and one chair conformation of the molecule is favoured. For example, if the substituent is a methyl group, the energy difference between the two conformations is 7.1 kJ/mol, leading to a ratio of about 95:5 (equatorial:axial). Larger substituents shift this ratio even further.
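The 95:5 ratio follows from the Boltzmann distribution applied to the 7.1 kJ/mol energy difference. A minimal sketch, assuming room temperature (298 K); the function name is mine, not the text's:

```python
from math import exp

R = 8.314e-3  # gas constant, kJ/(mol K)
T = 298.0     # assumed room temperature, K

def equatorial_fraction(delta_g):
    """Fraction of molecules in the equatorial-substituent chair, given
    the free-energy preference delta_g (kJ/mol) for that conformer."""
    k_eq = exp(delta_g / (R * T))  # equilibrium constant, equatorial/axial
    return k_eq / (1.0 + k_eq)

# Methylcyclohexane: a 7.1 kJ/mol preference gives roughly 95:5
print(round(equatorial_fraction(7.1) * 100))  # -> 95
```

Plugging in larger preferences (e.g. 14.2 kJ/mol for two equatorial methyls, discussed below) pushes the ratio towards 99:1 and beyond.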
The Newman projection can help visualise this phenomenon: the whole molecule is represented by connecting two Newman projections together.
Several substituents may be bound to a cyclohexane. For most substituents, their influences on the stability are simply additive. In the case of two substituents, one will in any case be placed in an equatorial position, because this costs less energy than an axial one (as we just saw for a single substituent); to minimize steric hindrance, this is the larger of the two. The second, smaller substituent sits in either an axial or an equatorial position, depending on the connectivity of the molecule. If both groups are equatorial, the equilibrium is shifted further towards this conformation. If the second substituent is axial, the equilibrium moves back towards a 50:50 composition, the effects of the two substituents acting against each other.
Let’s see this through some examples. As explained before, a methyl group on cyclohexane favours the equatorial conformation by 7.1 kJ/mol. A second methyl group in an equatorial position increases the preference for this conformation by the same amount (7.1 kJ/mol). The total energetic advantage of the diequatorial conformation reaches 14.2 kJ/mol, and the proportion of this conformer rises to about 99:1. This arrangement is labelled trans because the groups point in opposite directions; the molecule is trans-1,4-dimethylcyclohexane.
If the second methyl group is in an axial position, the two conformers are equivalent (the axial methyl becomes equatorial and vice versa). The total energetic advantage of either conformer is indeed 7.1 kJ/mol − 7.1 kJ/mol = 0 kJ/mol.
If the second group is a chlorine atom instead of a methyl group, we proceed the same way to determine which conformer is favoured. A single chlorine substituent stabilizes the equatorial conformation by 2.2 kJ/mol. If both substituents are equatorial, the total energetic advantage of the diequatorial conformer is 7.1 kJ/mol (from CH3) + 2.2 kJ/mol (from Cl) = 9.3 kJ/mol. The preference for one conformer is thus stronger than in methylcyclohexane.
If the chlorine atom is axial, the total energetic advantage of the conformer in which the methyl is equatorial is 7.1 kJ/mol − 2.2 kJ/mol = 4.9 kJ/mol. The conformer with an equatorial methyl is still more stable than the other, but by less than in methylcyclohexane.
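The additivity used in these examples is easy to encode. A sketch, assuming only the two preference values quoted above (7.1 kJ/mol for CH3, 2.2 kJ/mol for Cl); the data structure is mine:

```python
# Equatorial preferences quoted in the text, in kJ/mol
PREF = {"CH3": 7.1, "Cl": 2.2}

def net_preference(groups):
    """Net free-energy advantage (kJ/mol) of a chosen chair conformer.
    `groups` is a list of (name, equatorial) pairs: equatorial=True if
    the group is equatorial in that conformer, False if it is axial
    (an axial group counts against the conformer by the same amount)."""
    return sum(PREF[name] if equatorial else -PREF[name]
               for name, equatorial in groups)

# trans-1,4-dimethylcyclohexane: both methyls equatorial in one chair
print(net_preference([("CH3", True), ("CH3", True)]))   # 14.2
# cis isomer: one methyl is always axial, so no net preference
print(net_preference([("CH3", True), ("CH3", False)]))  # 0.0
# equatorial methyl with an axial chlorine
print(net_preference([("CH3", True), ("Cl", False)]))   # ~4.9
```

As the text notes next, this simple additivity holds best for 1,4-substitution; other substitution patterns introduce extra interactions the model ignores.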
Note that other interactions may affect the stability of the conformers. When the substituents occupy positions other than 1,4, interactions (steric hindrance, repulsion, …) may decrease or increase the energetic advantage of one conformer over the other.
Molecules are not limited to one ring. Several rings may share carbons. A molecule composed of two cyclohexanes sharing two carbons is called decalin and exists in trans and cis forms. Fused rings do not have to be the same size: a cyclohexane can, for example, be fused with a cyclopentane.
Two rings may also be interlocked across each other, giving bridged bicyclic compounds. Norbornane is a bridged cyclohexane or, equivalently, two cyclopentanes sharing three carbons. The carbons bearing the bridge are called bridgeheads.
Polycyclic alkanes reduce the conformational freedom of the molecule, but there seems to be no limit to the ring strain that hydrocarbons can endure, as bicyclobutane suggests.
Polycycles built from small rings (3–4 carbons) are in general not produced naturally but can be made by synthesis. All sorts of carbon skeletons have been made in the laboratory. Thanks to their large ring strain, such compounds can be of interest as explosives when they bear nitro groups.
Polycycles made of larger rings are often found in natural compounds, where they give rise to specific odors, fragrances, and colors, or play very specific roles, as hormones do.
It can sometimes feel like a poetic cliché to even look at the Moon. It seems almost too easy a way to summon cyclicality, illumination, mystery, and even romanticism. The Moon is always shifting through its cycles yet always present and the same; it serves us as a source of light but is actually reflecting that light from somewhere else; every once in a while, an eclipse renders it strange; meanwhile, it has a side that always stays hidden, with an air of mystery almost always categorized as feminine. Italo Calvino, however, uses the moon and other celestial bodies playfully in his short story collection Cosmicomics.
“The Distance of the Moon” is perhaps Calvino’s most well-known story from Cosmicomics, a set of whimsical stories chronicling a history of the cosmos loosely based on scientific facts. The original set of twelve was first published in 1965, just four years before the Moon landing. Each story begins with a short nonfiction portion, as if invoking a Muse, before diving into the realm of imagination. “The Distance of the Moon” begins with the fact that the Moon’s orbit is gradually moving farther away from the Earth; in an interesting phenomenon of reciprocal cause-and-effect, this change is actually a result of the tides, which are themselves caused by the Moon’s gravitational force acting upon the Earth. Building from lineages of both scientists and mythmakers, Calvino draws upon these facts about the Moon’s distance and brings them to their (il)logical extreme.
In William Weaver’s 1968 translation, he opens the fictional portion of the story with an exclamation: “How well I know!—old Qfwfq cried,—the rest of you can’t remember, but I can.” The unpronounceable narrator, Qfwfq, dives into his story as if directly responding to the nonfictional epigraph that came before. In this way, the scientific portion is not so much a citation as it is part of an ongoing conversation about the nature of things. Calvino does not exactly undermine the scientific aspect of it; instead, he’s riffing on the spirit of curiosity, of scientific inquiry and stretches of inference.
The overall premise of the story is that long ago, the Moon drew so close to the high tide that you could row out in a boat and leap across the gap to safely explore the Moon’s surface; we gradually shift more and more into this realm of the fantastic as Qfwfq describes the process of harvesting the scaly, fish-smelling “moon milk” encrusted on the underside of the moon, which is composed of biological debris that floats up from the Earth and gets stuck there. Meanwhile, there is “always a flight of tiny creatures—little crabs, squid, and even some weeds, light and filmy, and coral plants” that either stick to the moon as well or else float in the in-between space, caught between gravitational forces. In a way, through gravity instead of reproduction, they are re-enacting the gradual movement of life out of water (a different Cosmicomics story, “The Aquatic Uncle,” is entirely devoted to playing with this evolutionary narrative).
In describing this, Calvino works with perspective using a cinematic playfulness that remains roughly faithful to the way gravity works while still being entirely impossible:
Seen from the Earth, you looked as if you were hanging there with your head down, but for you, it was the normal position, and the only odd thing was that when you raised your eyes you saw the sea above you, glistening, with the boat and the others upside down, hanging like a bunch of grapes from the vine.
These de-familiarized laws of attraction also play out within a lopsided love triangle. Qfwfq is attracted to his fellow traveler Vhd Vhd, who has a crush on Qfwfq’s cousin, “the Deaf One,” who only has eyes for the Moon herself. Qfwfq registers this similarity between celestial and corporeal gravities; he describes trying to prevent himself from floating away from the boat and reaching for Mrs Vhd Vhd’s breasts, “which were round and firm, and the contact was good and secure and had an attraction as strong as the Moon’s or even stronger,” with a tone that seems to be more about closeness and attachment than about sex.
By taking everything in a stride of playful curiosity, Calvino allows nonfiction and fantasy to live peacefully in the same world, governed by the same spirit of discovery and a willingness to dwell in the unknown. This willingness is also key to the construction of a love story. Each person (or being) cannot be fully known to the other. Characters don’t merge together but are instead suspended in eternal states of movement in relation to each other, simultaneously repetitive and ever-changing just like the orbit of the Moon around the Earth.
In his lecture on the subject of lightness, part of his Six Memos for the Next Millenium, Calvino explores the idea of “literature as an existential function, the search for lightness as a reaction to the weight of living” (translation by Geoffrey Brock, 2016). He writes:
One might say that what strikes the literary imagination about Newton’s theories [on universal gravitation] is not the subjection of all things and people to their own inescapable weight but rather the balance of forces that allows celestial bodies to float in space.
Sometimes the idea of gravity conjures heaviness, but lightness, too, depends on forces of attraction and so do love stories. Calvino’s work reminds us that curiosity itself is a kind of gravity, a pull that is difficult to understand or measure and yet is instinctively, unavoidably felt. Leaving the facts to the scientists, Calvino is more interested in images, stories, and the feelings they provoke. | 0.854172 | 3.521736 |
Astronomers from University of Warwick in Coventry, England, said on April 4, 2019, that they’ve detected
a relatively large fragment from a former planet, orbiting in a disk of debris encircling a dead star. The star is a white dwarf, and it’s located 410 light-years away. The white dwarf should have destroyed its solar system in a system-wide cataclysm that followed its death. But the newly discovered planet fragment is thought to be rich in heavy metals – iron and nickel – which helped it survive destruction. The astronomers said the fragment is orbiting the white dwarf:
… closer than we would expect to find anything still alive.
They also said the planet fragment has a “comet-like tail” of gas, creating a ring within the debris disk. And they said this system offers us a hint as to the future of our own solar system, 6 billion years from now. The discovery was reported in the peer-reviewed journal Science on April 4. These astronomers’ statement explained:
The iron and nickel rich planetesimal survived a system-wide cataclysm that followed the death of its host star, SDSS J122859.93+104032.9. Believed to have once been part of a larger planet, its survival is all the more astonishing as it orbits closer to its star than previously thought possible, going around it once every two hours.
This is the second time astronomers have found a solid planetesimal in a tight orbit around a white dwarf. It’s the first time that scientists have used spectroscopy for this sort of discovery. These astronomers used the Gran Telescopio Canarias in La Palma in Spain’s Canary Island. They examined:
… the debris disk orbiting the white dwarf, formed by the disruption of rocky bodies composed of elements such as iron, magnesium, silicon, and oxygen – the four key building blocks of the Earth and most rocky bodies. Within that disk they discovered a ring of gas streaming from a solid body, like a comet’s tail. This gas could either be generated by the body itself or by evaporating dust as it collides with small debris within the disk.
The astronomers estimate that this body has to be at least a kilometer (.6 miles) in size, but could be as large as a few hundred kilometers in diameter, comparable to the largest asteroids known in our solar system.
According to astronomers’ theories, our sun will become a white dwarf when it has burnt all the thermonuclear fuel (especially the light elements hydrogen and helium) that now enables it to shine. When this happens, our sun is expected to shed its outer layers, these astronomers said, leaving behind a white dwarf:
… a dense core which slowly cools over time. This particular star has shrunk so dramatically that the planetesimal orbits within its sun’s original radius. Evidence suggests that it was once part of a larger body further out in its solar system and is likely to have been a planet torn apart as the star began its cooling process.
The star would have originally been about two solar masses, but now the white dwarf is only 70 percent of the mass of our sun. It is also very small – roughly the size of the Earth – and this makes the star, and in general all white dwarfs, extremely dense.
The white dwarf’s gravity is so strong — about 100,000 times that of the Earth’s — that a typical asteroid will be ripped apart by gravitational forces if it passes too close to the white dwarf.
Co-author Boris Gaensicke, also of University of Warwick, added:
The planetesimal we have discovered is deep into the gravitational well of the white dwarf, much closer to it than we would expect to find anything still alive. That is only possible because it must be very dense and/or very likely to have internal strength that holds it together, so we propose that it is composed largely of iron and nickel.
If it was pure iron it could survive where it lives now, but equally it could be a body that is rich in iron but with internal strength to hold it together, which is consistent with the planetesimal being a fairly massive fragment of a planet core. If correct, the original body was at least hundreds of kilometers in diameter because it is only at that point planets begin to differentiate – like oil on water – and have heavier elements sink to form a metallic core.
The discovery offers a glimpse into the future of our own solar system. Manser said:
As stars age they grow into red giants, which ‘clean out’ much of the inner part of their planetary system. In our solar system, the Sun will expand up to where the Earth currently orbits, and will wipe out Earth, Mercury, and Venus. Mars and beyond will survive and will move further out.
The general consensus is that 5 to 6 billion years from now, our solar system will be a white dwarf in place of the sun, orbited by Mars, Jupiter, Saturn, the outer planets, as well as asteroids and comets. Gravitational interactions are likely to happen in such remnants of planetary systems, meaning the bigger planets can easily nudge the smaller bodies onto an orbit that takes them close to the white dwarf, where they get shredded by its enormous gravity.
Bottom line: Astronomers have identified a heavy metal planet fragment orbiting the white dwarf star SDSS J122859.93+104032.9. The system may give us a glimpse of what our solar system will become 6 billion years from now.
Deborah Byrd created the EarthSky radio series in 1991 and founded EarthSky.org in 1994. Today, she serves as Editor-in-Chief of this website. She has won a galaxy of awards from the broadcasting and science communities, including having an asteroid named 3505 Byrd in her honor. A science communicator and educator since 1976, Byrd believes in science as a force for good in the world and a vital tool for the 21st century. "Being an EarthSky editor is like hosting a big global party for cool nature-lovers," she says. | 0.908783 | 3.741311 |
NASA has awarded a sole source contract to the Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado Boulder for the Total and Spectral Solar Irradiance Sensor-2 (TSIS-2). The new sensor provides continuity to data delivered by TSIS-1, which launched in December 2017. LASP will receive funding to build two instruments, the Total Irradiance Monitor (TIM) and Spectral Irradiance Monitor (SIM) and will operate the spacecraft after it launches in 2023.
Posts Tagged: Dan Strain
A type of Martian aurora first identified by NASA’s MAVEN spacecraft in 2016 is actually the most common form of aurora occurring on the Red Planet, according to new results from the mission. The aurora is known as a proton aurora and can help scientists track water loss from Mars’ atmosphere and sheds light on Mars’ changing climate.
Over the past year, NASA’s Parker Solar Probe came closer to the sun than any other object designed and developed by humans—and CU Boulder scientists have been along for the ride. David Malaspina, a LASP Space plasma researcher, is part of a team of CU Boulder scientists who contributed to those early insights. The group designed a signal processing electronics board that is integral to the FIELDS experiment, one of four suites of instruments onboard Parker Solar Probe.
Early one morning in late August 2019, Colorado photographer Glenn Randall hiked several miles to a stream flowing into Lake Isabelle in the Indian Peaks Wilderness. He set up his camera near the stream and began photographing about 20 minutes before sunrise when a golden glow developed at the horizon. It wasn’t until Randall was back at home, however, that he noticed something odd: The sky above the golden glow and its reflection in the water were both a deep violet.
He’s not alone. Photographers across the country have noticed that sunrises and sunsets have become unusually purple this summer and early fall.
Now, LASP researchers have collected new measurements that help to reveal the cause of those colorful displays: an eruption that occurred thousands of miles away on a Russian volcano called Raikoke.
NASA will soon have new eyes on the Sun. Two miniature satellites designed and built at LASP are scheduled to launch later this month on Spaceflight’s SSO-A: SmallSat Express mission onboard a SpaceX Falcon 9 rocket from Vandenberg Air Force Base in California.
The new missions—called the Miniature X-ray Solar Spectrometer-2 (MinXSS-2) and the Compact Spectral Irradiance Monitor (CSIM)—will collect data on the physics of the Sun and its impact on life on Earth.
These “CubeSats,” which are smaller than a microwave oven, are set to blast into a near-Earth orbit alongside more than 60 other spacecraft. According to Spaceflight, SSO-A is the largest dedicated rideshare mission from a U.S.-based launch vehicle to date. | 0.829665 | 3.211181 |
A planet-size object may be orbiting the sun in the icy reaches of the solar system beyond Pluto.
Scientists at the University of Arizona's Lunar and Planetary Laboratory (LPL) have determined that an unseen object with a mass somewhere between that of Earth and Mars could be lurking in the Kuiper Belt, a region beyond Neptune filled with thousands of icy asteroids, comets and dwarf planets.
In In January 2016, a separate group of scientists predicted the existence of a Neptune-size planet orbiting the sun far, far beyond Pluto — about 25 times farther from the sun than Pluto is. This hypothetical planet was dubbed "Planet Nine," so if both predictions are correct, one of these putative objects could be the solar system's 10th planet.
The so-called "planetary-mass object" described by the scientists from LPL appears to affect the orbits of a population of icy space rocks in the Kuiper Belt. Distant Kuiper Belt objects (KBOs) have tilted orbits around the sun. The tilted orbital planes of most KBOs average out to something called the invariable plane of the solar system.
But the orbits of the most distant KBOs tilt away from the invariable plane by an average of 8 degrees, which signals the presence of a more massive object that warps its surroundings with its gravitational field, researchers said in a study due to be published in The Astronomical Journal.
"The most likely explanation for our results is that there is some unseen mass," Kat Volk, a postdoctoral fellow at LPL and the lead author of the study, said in a statement. "According to our calculations, something as massive as Mars would be needed to cause the warp that we measured."
These KBOs act a lot like spinning tops, Renu Malhotra, a professor of planetary sciences at LPL and co-author of the new study, said in the statement.
"Imagine you have lots and lots of fast-spinning tops, and you give each one a slight nudge … If you then take a snapshot of them, you will find that their spin axes will be at different orientations, but on average, they will be pointing to the local gravitational field of Earth," she said. "We expect each of the KBOs' orbital tilt angle to be at a different orientation, but on average, they will be pointing perpendicular to the plane determined by the sun and the big planets."
It may sound a lot like the mysterious Planet Nine, but the researchers say the so-called planetary-mass object is too small, and too close, to be the same thing. Planet Nine lies 500 to 700 astronomical units (AU) from Earth, and its mass is about 10 times that of Earth. (One AU is the average distance at which Earth orbits the sun — 93 million miles, or 150 million kilometers. Pluto orbits the sun at a maximum distance of just less than 50 AU.)
"That is too far away to influence these KBOs," Volk said. "It certainly has to be much closer than 100 AU to substantially affect the KBOs in that range."
Though no planet-size objects have been spotted in the Kuiper Belt so far, the researchers are optimistic that the Large Synoptic Survey Telescope (LSST), which is currently under construction in Chile, will help find these hidden worlds. "We expect LSST to bring the number of observed KBOs from currently about 2,000 to 40,000," Malhotra said.
"There are a lot more KBOs out there — we just have not seen them yet," Malhotra added. "Some of them are too far and dim even for LSST to spot, but because the telescope will cover the sky much more comprehensively than current surveys, it should be able to detect this object, if it's out there." | 0.906586 | 3.914701 |
“It’s completely silly to search the galaxy with radio telescopes for a radio civilization. In my mind, it’s as chuckleheaded as deciding you’re going to search the galaxy for a decent Italian restaurant.” –Terence McKenna
If it had been up to him, Terence McKenna wouldn’t have built satellites or mechanisms to search the expanse of the universe for extraterrestrials. He would have rummaged around Earth to find them first. Given the complex and mysterious nature of fungi, McKenna thought they he had.
McKenna believed that mushrooms were extraterrestrials designed to “travel across the gulf between the stars.”
Across the Universe
Fungi, according to McKenna, “look sort of manufactured.”
Taking McKenna’s theory seriously, mushrooms seem to be high-tech bio-design. Their spores are so electron-dense that they’re actually closer to being metal, which shields them from the vacuum of space and from radiation. The outer layer of their spores has a purple hue, which naturally allows them to deflect ultraviolet light.
Their cell walls contain chitin which is the same material that makes up the hard shell of insects, butterfly wings, and a peacock’s plumage. If you were to look at butterfly wings under a microscope, they would look like a series of plates layered on top of each other the roof of a house and glow the colors of the rainbow. Regardless of whether we use telescopes that project us into space or microscopes that show us the most inner innards of life itself–everything is connected. We’re all doing the same thing.
We’ve only been studying DNA since the 1950s so we are just at the beginning of understanding the structures of life and how they function. Extend those few back over hundreds, even thousands of years. Fungi seem to be engineered to be the perfect way to travel. So, let’s take a trip across the universe through the perspective of a mushroom.
The Imaginary Line Between Earth and Outer Space
No definitive boundary exists between the atmosphere and outer space. It just gradually gets thinner and fades away. This “imaginary boundary” is called the Kármán line.
Studies of the biology of the upper atmosphere date back to the late 1800s. They were done by releasing balloons, a rather whimsical image ripe for an imagination like Fornasetti’s considering who they floated amongst. The organisms that the balloons gathered included fungi and spore-forming bacteria.
Using meteorological rockets instead of fanciful balloons, later studies found basic life forms as high as 77km, the highest altitude from which we have isolated microbes. Which means that fungi spores are hanging out in the atmosphere.
The Hidden Kingdom
Hidden kingdoms unto themselves, Paul Stamets, a well-known mycologist, claims that there are an estimated two million species of fungi and only 150,000 of them form mushrooms. What we see above ground are the reproductive bodies of a larger network underground. They shed about three million spores a minute for two weeks.
Mushrooms transmit information across the underground mycelium network using the same neurotransmitters that our brains do: the chemicals that produce our ability to think. “They’re sentient, aware, and highly evolved,” says Paul Stamets. He calls this network the earth’s internet.
This network could be the foundation for all life, including our own.
We Evolved from Mushrooms
One of the big differences between animals and fungi is that we have stomachs inside our bodies. About 600 million years ago, the “branch of fungi leading to animals evolved to capture nutrients by surrounding their food with cellular sacs–essentially primitive stomachs.” As our little organism ancestors evolved, they developed outer layers of cells–skins!–to keep in moisture and protect the organism.
Without fungi, life would not have persisted on earth.
As we know it, life on earth, as we understand it, largely evolved from two asteroid impacts, one that occurred 250,000 years ago and 64 million years ago that supposedly cleared out the entire kingdom of dinosaurs. Fungi were thriving. The life forms that teamed fungi survived and flourished.
In other words, fungi were the world’s first superhero.
We understand there to be seven microbes (microorganisms), fungi being one of them, that might have crashed to Earth on safe containers. The idea that life originated due to asteroids, stardust, planetary fragments, etc., is not new. But let’s suspend our disbelief and imagine that life crashed to Earth from outer space at a time when planets and stars were in closer proximity than today.
Coming from the Greek words for “all” and “seed,” panspermia is a hypothesis that life exists throughout the universe and distributes itself on space dust, meteoroids, asteroids, comets, planetoids, and even on spacecraft. There are a few different subcategories of theories stemming from panspermia, but lithopanspermia proposes that organisms traveled to other planets on rocks through interplanetary or interstellar space.
Why build a spaceship when you already have flying objects everywhere?
Research from Princeton University published in 2012 confirms a high probability that life might have spread during our solar system’s infancy at a time when all of the planetary and star bodies lived more or less in the same condominium. The evidence put forth by the study is the strongest support of lithopanspermia to date which would mean basic life forms like fungi could have traveled across the universe on material like planetary fragments. One could even call these fragments “vessels.”
The logistics aside, which concern velocity, researchers reported that our solar system and its neighbor could have swapped material 100 trillion times over.
A paper in 2009 determined that microorganisms could survive in space on solid matter depending on its size. They could endure approximately 12 to 500 million years.
Now, we suppose that life on Earth happened after surface water miraculously also came into existence. If that is the case, “…there were possibly about 400 million years when life could have journeyed from the Earth to another habitable world, and vice versa. Life on Earth may have originated beyond our solar system.
Somewhere in the universe of 1967, in the wee hours of the morning, John Lennon was in bed and irritated. After having an argument with his wife, she had drifted to sleep. Lennon, however, could not. In-between wakefulness and sleep, words flooded into his consciousness “like an endless stream.” They wouldn’t stop, drove him out of bed, down the stairs–“Words are flying out like endless rain into a paper cup, They slither while they pass”–to put them down on paper before they slipped away “…across the universe.”
We’re in space, already. The line between the outer and inner is imaginary.
“Hey guys, we are the vehicle, get it? We call it consciousness–the ultimate technology.” –Love, The ‘Shrooms | 0.875583 | 3.440711 |
Our goal is to study the processes that lead to the formation of low mass stars, brown dwarfs and planets and to characterize the physical properties of these objects in various evolutionary stages. Low mass stars and brown dwarfs are likely the most numerous type of objects in our Galaxy but due to their low intrinsic luminosity they are not so well known. We aim to study the frequency, multiplicity and spatial distribution of these objects in the solar neighbourhood and in nearby star forming regions and stellar clusters in order to better understand the mechanism of formation, characterise their optical and infrared properties and establish the relation between spectral properties, mass and luminosity.. Most of our effort will be dedicated to push toward lower mass limits the detection of these astros either bounded to stars and brown dwarfs and/or free-floating in interstellar space. The lowest mass objects display a lower intrinsic luminosity and cooler effective temperatures thus they are remarkably difficult to detect using direct imaging techniques. However, these techniques allow a full photometric and spectroscopic characterization and a best determination of their physical and chemical properties. We also aim to investigate the presence of planets around low mass stars using radial velocity measurements and techniques for high spatial resolution imaging. We will develop ultrastable spectrographs for large telescopes and systems for ultrafast imaging. With the spectrographs it would be possible to detect planets of similar mass to the Earth around G, K and M-type stars. The goal is to establish the frequency of these planets in stars of the solar neighbourhood and characterise the properties of the associated planetary systems.
Members of the project
Highlights and results
- The optical and near-infrared sequence of 10 Myr-old L dwarfs in the nearest OB association to the Sun, Upper Scorpius
- The lithium depletion boundary of the Hyades cluster.
New Isolated Planetary-mass Objects and the Stellar and Substellar Mass Function of the σ Orionis Cluster
We report on our analysis of the VISTA Orion ZY JHKs photometric data (completeness magnitudes of Z = 22.6 and J = 21.0 mag) focusing on a circular area of 2798.4 arcmin2 around the young σ Orionis star cluster (~3 Myr, ~352 pc, and solar metallicity). The combination of the VISTA photometry with optical, WISE and Spitzer data allows us to identifyPeña-Ramírez, K. et al.
Polarisation of very-low-mass stars and brown dwarfs. I. VLT/FORS1 optical observations of field ultra-cool dwarfs
Context: Ultra-cool dwarfs of the L spectral type (T_eff = 1400-2200 K) are known to have dusty atmospheres. Asymmetries of the dwarf surface may arise from rotationally-induced flattening and dust-cloud coverage, and may result in non-zero linear polarisation through dust scattering. Aims: We aim to study the heterogeneity of ultra-cool dwarfs' … (Goldman, B. et al.)
Further investigation of white dwarfs in the open clusters NGC 2287 and NGC 3532
We report the results of a CCD imaging survey, complemented by astrometric and spectroscopic follow-up studies, that aims to probe the fate of heavy-weight intermediate-mass stars by unearthing new, faint, white dwarf members of the rich, nearby, intermediate-age open clusters NGC 3532 and NGC 2287. We identify a total of four white dwarfs with … (Dobbie, P. D. et al.)
Beta Serpentis, Latinized from β Serpentis, is a binary star system in the constellation Serpens, in its head (Serpens Caput). It is visible to the naked eye with a combined apparent visual magnitude of +3.65. Based upon an annual parallax shift of 21.03 mas as seen from Earth, it is located around 155 light years from the Sun. The system is a member of the Ursa Major Moving Group.
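The distance figure follows directly from the quoted parallax via d[pc] = 1/p[arcsec]. A quick sketch of the arithmetic (the light-years-per-parsec factor is the standard conversion, not taken from the article):

```python
# Distance from annual parallax: d[parsec] = 1 / p[arcsec] = 1000 / p[mas]
parallax_mas = 21.03          # Hipparcos parallax of Beta Serpentis, from the text
LY_PER_PARSEC = 3.26156       # standard light-years-per-parsec conversion

distance_pc = 1000.0 / parallax_mas
distance_ly = distance_pc * LY_PER_PARSEC

print(f"{distance_pc:.1f} pc = {distance_ly:.0f} ly")  # 47.6 pc = 155 ly
```

The result reproduces the "around 155 light years" quoted above.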
The visual magnitude +3.68 primary, component A, is either an ordinary A-type main-sequence star or somewhat evolved subgiant with a stellar classification of A2 V or A2 IV, respectively. The star is about 267 million years old with nearly double the mass of the Sun. It is spinning rapidly with a projected rotational velocity of 207 km/s.
The secondary component, visual magnitude 9.7 B, lies at an angular separation of 30.6 arc seconds. It is a main-sequence star with a class of K3 V.
There is a magnitude +10.98 visual companion, designated component C, located 202 arcseconds away.
It was a member of indigenous Arabic asterism al-Nasaq al-Sha'āmī, "the Northern Line" of al-Nasaqān "the Two Lines", along with β Her (Kornephoros), γ Her (Hejian, Ho Keen) and γ Ser (Zheng, Ching).
According to the catalogue of stars in the Technical Memorandum 33-507 - A Reduced Star Catalog Containing 537 Named Stars, al-Nasaq al-Sha'āmī or Nasak Shamiya was the title for three stars: β Ser as Nasak Shamiya I, γ Ser as Nasak Shamiya II, and γ Her as Nasak Shamiya III (excluding β Her).
In Chinese, 天市右垣 (Tiān Shì Yòu Yuán), meaning Right Wall of Heavenly Market Enclosure, refers to an asterism which represents eleven old states in China and marks the right borderline of the enclosure, consisting of β Serpentis, β Herculis, γ Herculis, κ Herculis, γ Serpentis, δ Serpentis, α Serpentis, ε Serpentis, δ Ophiuchi, ε Ophiuchi and ζ Ophiuchi. Consequently, β Serpentis itself is known as 天市右垣五 (Tiān Shì Yòu Yuán wu, English: the Fifth Star of Right Wall of Heavenly Market Enclosure), representing the state Zhou (周) (possibly Chow, the dynasty in China), together with η Capricorni and 21 Capricorni in Twelve States (asterism).
The Moon and Jupiter will share the same right ascension, with the Moon passing 2°41' to the north of Jupiter. The Moon will be 18 days old.
From Fairfield, the pair will be visible in the morning sky, becoming accessible around 22:44, when they rise to an altitude of 7° above your eastern horizon. They will then reach their highest point in the sky at 03:38, 41° above your southern horizon. They will be lost to dawn twilight around 06:29, 27° above your south-western horizon.
The Moon will be at mag -12.3, and Jupiter at mag -2.3, both in the constellation Virgo.
The pair will be too widely separated to fit within the field of view of a telescope, but will be visible to the naked eye or through a pair of binoculars.
A graph of the angular separation between the Moon and Jupiter around the time of closest approach is available here.
The positions of the two objects at the moment of conjunction will be as follows:
|Object||Right Ascension||Declination||Constellation||Magnitude||Angular Size|
The coordinates above are given in J2000.0. The pair will be at an angular separation of 125° from the Sun, which is in Capricornus at this time of year.
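Separations like these can be reproduced from J2000.0 coordinates with the spherical law of cosines for the great-circle distance. A minimal sketch (the declination used for Jupiter below is a hypothetical placeholder, since the table's values are not preserved here; only the 2°41' offset comes from this page):

```python
import math

def angular_separation(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
    """Great-circle separation of two sky positions, in degrees
    (spherical law of cosines, clamped against rounding error)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1_deg, dec1_deg, ra2_deg, dec2_deg))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

# At conjunction the Moon and Jupiter share the same right ascension and
# differ by 2°41' in declination (illustrative declination for Jupiter):
dec_jupiter = -5.0                       # hypothetical value for the demo
dec_moon = dec_jupiter + 2 + 41 / 60     # 2°41' to the north
sep = angular_separation(180.0, dec_moon, 180.0, dec_jupiter)
print(f"{sep:.3f}°")   # 2.683° = 2°41'
```

When the right ascensions are equal, the separation reduces to the declination difference, as the output confirms.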
The sky on 15 February 2017
All times shown in EST.
The circumstances of this event were computed using the DE405 planetary ephemeris published by the Jet Propulsion Laboratory (JPL).
This event was automatically generated by searching the ephemeris for planetary alignments which are of interest to amateur astronomers, and the text above was generated based on an estimate of your location.
26 Sep 2016 – Jupiter at solar conjunction
17 Feb 2017 – Jupiter at aphelion
07 Apr 2017 – Jupiter at opposition
26 Oct 2017 – Jupiter at solar conjunction
Astronomers have captured 15 new images of the inner rims of planet-forming discs hundreds of light-years away that shed new light on how planetary systems are formed.
Made of dust and gas – and similar in shape to a music record – these discs form around young stars. Previous pictures were taken with the largest single-mirror telescopes, which couldn’t capture their finest details.
“In these pictures, the regions close to the star, where rocky planets form, are covered by only few pixels,” says Jacques Kluska from KU Leuven in Belgium, lead author of a paper in Astronomy & Astrophysics. “We needed to visualise these details to be able to identify patterns that might betray planet formation and to characterise the properties of the discs.”
To do this, Kluska and colleagues used infrared interferometry. They first combined the light collected by four telescopes at the ESO’s Very Large Telescope Observatory in Chile, then recovered the details of the discs with a mathematical reconstruction technique similar to that used to capture the first image of a black hole.
“We had to remove the light of the star, as it hindered the level of detail we could see in the discs”, Kluska says.
Distinguishing details at the scale of the orbits of rocky planets like Earth or Jupiter, as can be seen in the images, is equivalent to being able to see a human on the Moon or distinguish a hair from 10 kilometres away, notes co-author Jean-Philippe Berger of the Université Grenoble-Alpes, France.
“Infrared interferometry is becoming routinely used to uncover the tiniest details of astronomical objects. Combining this technique with advanced mathematics finally allows us to turn the results of these observations into images.”
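The comparisons above are simple small-angle arithmetic. A sketch that checks them, assuming an H-band observing wavelength and a roughly 130 m maximum VLTI baseline (neither figure is stated in the article):

```python
import math

RAD_TO_MAS = math.degrees(1) * 3600 * 1000   # radians -> milliarcseconds

def subtended_angle_mas(size_m, distance_m):
    """Angle subtended by an object (small-angle approximation), in mas."""
    return (size_m / distance_m) * RAD_TO_MAS

# The comparisons from the text:
human_on_moon = subtended_angle_mas(1.7, 3.84e8)   # ~1.7 m person at the Moon's distance
hair_at_10km = subtended_angle_mas(7e-5, 1e4)      # ~70 micron hair seen from 10 km

# Diffraction limit lambda/B of an interferometer (assumed values):
vlti_limit = (1.6e-6 / 130.0) * RAD_TO_MAS

print(f"human on Moon: {human_on_moon:.1f} mas")   # ~0.9 mas
print(f"hair at 10 km: {hair_at_10km:.1f} mas")    # ~1.4 mas
print(f"lambda/B:      {vlti_limit:.1f} mas")      # ~2.5 mas
```

All three angles come out at the milliarcsecond scale, which is why the article treats the comparisons as equivalent.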
Curated content from the editorial staff at Cosmos Magazine.
GENERAL RELATIVITY & COSMOLOGY
for Undergraduates, by Professor John W. Norbury
This book, written by Professor John Norbury, explains general relativity and concepts of cosmology in a clear way, focusing on introducing these subjects at an undergraduate level. Professor Norbury's research interests are in theoretical nuclear and particle physics and cosmology. He did his post-doctoral work at NASA Langley Research Center on the problem of protecting astronauts from cosmic radiation. He has continued doing contract work for NASA ever since, and is responsible for calculating the effects of certain nuclear and particle reactions that occur when a cosmic-ray particle hits a spacecraft wall. He has worked on a variety of research problems including electromagnetic interactions in relativistic nucleus-nucleus collisions, Higgs boson and graviton production in nuclear collisions, relativistic quark models, and certain problems in cosmology and quantum gravity. Apart from this book, he has published books on quantum mechanics and quantum field theory for undergraduates.
Quarks, Leptons and the Big Bang
Quarks, Leptons and the Big Bang is a clear, readable and self-contained introduction to particle physics and related areas of cosmology. It bridges the gap between non-technical popular accounts and textbooks for advanced students. The book concentrates on presenting the subject from the modern perspective of quarks, leptons and the forces between them. This approach enables readers to grasp the essential concepts more easily than the traditional historical approach involving the complex interaction of hadrons. It then moves on to applying these ideas to modern cosmology.
The Exploration of Near-Earth Objects
Comets and asteroids are in some sense the fossils of the solar system. They have avoided most of the drastic physical processing that shaped the planets and thus represent more closely the properties of the primordial solar nebula. What processing has taken place is itself of interest in decoding the history of our solar neighborhood. Near-Earth objects are also of interest because one or more large ones have been blamed for the rare but devastating events that caused mass extinctions of species on our planet, as attested by recent excitement over the impending passage of asteroid 1997 XF11. The comets and asteroids whose orbits bring them close to Earth are clearly the most accessible to detailed investigation, both from the ground and from spacecraft. When nature kindly delivers the occasional asteroid to the surface of Earth as a meteorite, we can scrutinize it closely in the laboratory; a great deal of information about primordial chemical composition and primitive processes has been gleaned from such objects. This report reviews the current state of research on near-Earth objects and considers future directions. Attention is paid to the important interplay between ground-based investigations and spaceborne observation or sample collection and return. This is particularly timely since one U.S. spacecraft is already on its way to rendezvous with a near-Earth object, and two others plus a Japanese mission are being readied for launch. In addition to scientific issues, the report considers technologies that would enable further advances in capability and points out the possibilities for including near-Earth objects in any future expansion of human exploration beyond low Earth orbit.
The Foundations Of Celestial Mechanics
This book covers classical mechanics from the very basics and then proceeds to celestial mechanics, covering the two-body problem and the dynamics of systems of more than two bodies. The basics of perturbation theory are also covered.
Space Telescope Science
Imagine living on a planet with seasons so unpredictable you would hardly know what to wear: Bermuda shorts or a heavy overcoat! That's the situation on a weird world found by NASA's planet-hunting Kepler space telescope. The planet, designated Kepler-413b, is located 2,300 light-years away in the constellation Cygnus. It circles a close pair of orange and red dwarf stars every 66 days. But what makes this planet very unusual is that it wobbles, or precesses, wildly on its spin axis, much like a child's top. The planet's orbit is tilted with respect to the plane of the binary star's orbit. Over an 11-year period, the planet's orbit too would appear to wobble as it circles around the star pair. All of this complex movement leads to rapid and erratic changes in seasons. | 0.865706 | 3.039434 |
After one of the first missions, on which an astronaut brought along a store-bought camera, the importance of photography in space was recognized. This prompted a collaboration with ZEISS to develop photography systems specifically designed for space and its extreme conditions. Achieving this ambition required new technologies that addressed the unknowns:
- How would cameras and lenses function in extreme temperature fluctuations?
- Would the lens optics change in zero gravity?
- What mechanical changes would be needed for use in space?
In 1962, an image of the Earth from above was still a true novelty.
The first attempts to take pictures of our planet from space were stunning. As there was little to no experience with photography in space, each mission in the race to the moon brought new insights, but also challenges. Space photography was in its infancy when the Mercury Atlas 8 space mission commenced.
A Hasselblad 500C with a ZEISS Planar 2.8/80 lens with only a few small modifications was taken into orbit for the first time to study and document our planet.
On 21 December 1968, Apollo 8 became the first manned expedition to leave the earth’s orbit and travel toward to the moon.
The mission was to orbit the moon, photograph the lunar landscape and identify suitable future landing sites. Until then, people had only speculated what the moon’s surface might be like.
Something transpired during the fourth lunar orbit on 24 December that wasn’t on the flight plan: as the spacecraft emerged from the dark side of the moon, the astronauts beheld the earth rising above the lunar horizon. They hurried to capture this stunning image and took the first color photograph of the earth from the moon. This image, “Earthrise”, of a small blue planet floating in the darkness of space, forever changed the world’s perspective of the fragile, precious planet we call home.
On July 20, 1969, a collective dream became reality, with a footprint symbolizing this achievement: on that day, man set foot on the moon for the first time.
The limits of what seemed possible were now redefined. More than 500 million people around the world watched this first step and were awed by the images brought back to earth from the lunar surface.
ZEISS designed the Biogon 5.6/60 wide-angle lens specifically for the moon landing. The goal was for the photographs to capture the moon's surface with excellent edge-to-edge contrast and maximum definition. The Hasselblad Data Camera was fitted with a glass Reseau plate, which created cross marks on the images during exposure. These distinctive crosshatches made it possible to calibrate distances and heights enabling size-ratio analyses of objects on the moon.
ZEISS conducted thorough research and created a total of eight lens models, which were used in the Apollo program. The challenges of using camera lenses in space were addressed by:
- The cavities in all the lenses were opened up
- The apertures and focus rings were altered to make them easier to use while wearing the thick gloves of the space suits
- The lens edges were left uncoated to prevent outgassing
- A reflective silver coating made the lenses resistant to the fluctuating temperatures outside the spacecraft
- A black coating prevented reflections when taking photographs of objects outside
Who developed these impressive camera lenses to use in space?
Many of the preeminent achievements are thanks in large part to Dr. Erhard Glatzel and his team, Johannes Berger and Günther Lange. In the 1960s, Glatzel was one of the leading scientists and managers in the lens design department at ZEISS in Oberkochen, Germany. His creations were world-renowned, including the ZEISS Hologon and the ZEISS Planar 0.7/50. The Planar 0.7/50 was developed in 1966 for use in very dark conditions. The lens was so fast that it was later used, in 1973, to film scenes lit entirely by candlelight in the movie Barry Lyndon, marking the first time in film history that it was possible to shoot without artificial light.
In honor of the accomplishment in designing special lenses for the moon missions, Dr. Erhard Glatzel received the Apollo Achievement Award. Under his leadership, ZEISS developed more than 100 lens designs.
A total of 12 cameras were used on the moon and left behind by the crews of the landing modules to save weight upon departure.
During Apollo 17, currently the final manned mission to the moon, the astronauts captured spectacular panoramic photographs of the surreal lunar landscape. During this mission, the last of these 12 cameras was left behind on the dusty ground, with the lens pointed at the zenith. The reason? If an astronaut ever returns to the landing spot of this mission, analysis could be performed on the lens to measure the impact of cosmic solar radiation. | 0.825082 | 3.366149 |
Welcome back to Messier Monday! In our ongoing tribute to the great Tammy Plotner, we take a look at the Messier 23 open star cluster. Enjoy!
Back in the 18th century, famed French astronomer Charles Messier noted the presence of several “nebulous objects” in the night sky. Having originally mistaken them for comets, he began compiling a list of these objects so that other astronomers wouldn’t make the same mistake. Consisting of 100 objects, the Messier Catalog has come to be viewed as a major milestone in the study of Deep Space Objects.
One of these objects is Messier 23 (aka. NGC 6494), a large open star cluster that is located in the constellation Sagittarius. Given its luminosity, it can be found quite easily in the rich star fields of the summer Milky Way using small telescopes and even binoculars.
Located some 2,150 light years (659 Parsecs) away from Earth, this vast cloud of 176 confirmed stars stretches across 15 to 20 light years of space. At an estimated 220 to 300 million years old, Messier 23 is on the “senior citizen” list of galactic open clusters in our galaxy. At this age, its hottest stars reach spectral type B9, and it even contains a few blue straggler candidates.
Given that M23 has spent many centuries sweeping through the interstellar medium, astronomers have wondered how this would affect its metal content. Using UBV photometry, astronomers examined the metallicity of M23 and determined that it had no discernible effect. As W.L. Sanders wrote of the cluster in 1990:
“UBV photometric observations of 176 stars in the galactic cluster NGC 6494 are presented and analyzed. The effect of a gas poor environment on the metal abundance of NGC 6494 is studied. It is determined that the metallicity of NGC 6494, which has a delta(U – B) value = + 0.02, is not affected by the interarm region in which it dwelled.”
At the same time, astronomers have discovered that some of M23’s older stars – the red giants – are suffering mass loss. As G. Barbaro (et al.) of the Istituto di Fisica dell’Universita put it in 1969:
“A statistical research on evolved stars beyond hydrogen exhaustion is performed by comparing the H-R diagrams of about 60 open clusters with a set of isochronous curves without mass loss derived from Iben’s evolutionary tracks and time scales for Population I stars. Interpreting the difference in magnitude between the theoretical positions thus calculated and the observed ones as due to mass loss, when negative, the results indicate that this loss may be conspicuous only for very massive and red stars. However, a comparison with an analogous work of Lindoff reveals that the uncertainties connected with the bolometric and color corrections may invalidate by a large amount the conclusions which might be drawn from such research.”
However, the most recent studies show that we have to determine radial velocities before we can reliably associate red giants with cluster membership. As J.C. Mermilliod of the Laboratoire d'Astrophysique de l'Ecole wrote in his 2008 study, “Red giants in open clusters”:
“The present material, combined with recent absolute proper motions, will permit various investigation of the galactic distribution and space motions of a large sample of open clusters. However, the distance estimates still remain the weakest part of the necessary data. This paper is the last one in this series devoted to the study of red giants in open clusters based on radial velocities obtained with the CORAVEL instruments.”
History of Observation:
This neat and tidy galactic star cluster was one of the original discoveries of Charles Messier, who recorded the following upon first viewing it on June 20th, 1764:
“In the night of June 20 to 21, 1764, I determined the position of a cluster of small stars which is situated between the northern extremity of the bow of Sagittarius and the right foot of Ophiuchus, very close to the star of sixth magnitude, the sixty-fifth of the latter constellation [Oph], after the catalog of Flamsteed: These stars are very close to each other; there is none which one can see easily with an ordinary refractor of 3 feet and a half, and which was taken for these small stars. The diameter of all is about 15 minutes of arc. I have determined its position by comparing the middle with the star Mu Sagittarii: I have found its right ascension of 265d 42′ 50″, and its declination of 18d 45′ 55″, south.”
While William Herschel did not publish his observations of Messier’s objects, he was still an avid observer. So of course, he had to look at this cluster, and wrote the following observations in his personal notes:
“A cluster of beautiful scattered, large stars, nearly of equal magnitudes (visible in my finder), it extends much farther than the field of the telescope will take in, and in the finder seems to be a nebula of a lengthened form extending to about half a degree.”
In July of 1835, Admiral Smyth would make an observation of Messier 23 and once again add his colorful remarks to the timeline:
“A loose cluster in the space between Ophiuchus’s left leg and the bow of Sagittarius. This is an elegant sprinkling of telescopic stars over the whole field, under a moderate magnifying power; the most clustering portion is oblique, in the direction sp to nf [south preceding to north following, SW to NE], with a 7th-magnitude star in the latter portion. The place registered it that of a neat pair, of the 9th and 10th magnitudes, of a lilac hue, and about 12″ apart. This object was discovered by Messier 1764, and it precedes a rich out-cropping of the Milky Way. The place is gained by differentiating the cluster with Mu Sagittarii, from which it bears north-west, distant about 5 deg, the spot being directed to by a line from Sigma on the shoulder, through Mu at the tip of the bow.”
Remember when observing Messier 23 that it won’t slap you in the face like many objects. Basically, it looks like a stellar scattering of freckles across the face of the sky when fully-resolved. It’s actually one of those objects that’s better to view with binoculars and low power telescopes.
Locating Messier 23:
M23 can be easily found with binoculars about a finger’s width north and two finger widths west of Mu Sagittarii. Or, simply draw a mental line between the top star in the teapot lid (Lambda) and Xi Serpentis. You’ll find a slight compression in the star field about halfway between these two stars that shows up as an open cluster with binoculars.
Using a finderscope, the object will appear nicely as a hazy spot. And for those using telescopes of any size, you’ll need to use fairly low magnification to help set this cluster apart from the surrounding star field, and it will resolve well to almost all instruments.
And here are the quick facts on this object to help you get started:
Object Name: Messier 23
Alternative Designations: M23, NGC 6494
Object Type: Open Star Cluster
Right Ascension: 17 : 56.8 (h:m)
Declination: -19 : 01 (deg:m)
Distance: 2.15 (kly)
Visual Brightness: 6.9 (mag)
Apparent Dimension: 27.0 (arc min)
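As a cross-check, the quick facts above reproduce the 15 to 20 light-year physical size quoted earlier in the article, using small-angle geometry:

```python
import math

distance_ly = 2150.0             # distance to M23, from the quick facts
apparent_diameter_arcmin = 27.0  # apparent dimension, from the quick facts

theta = math.radians(apparent_diameter_arcmin / 60.0)    # angular size in radians
diameter_ly = 2.0 * distance_ly * math.tan(theta / 2.0)  # physical diameter

print(f"{diameter_ly:.1f} light-years")   # 16.9 light-years
```

The result lands comfortably within the quoted 15 to 20 light-year range.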
We have written many interesting articles about Messier Objects here at Universe Today. Here’s Tammy Plotner’s Introduction to the Messier Objects, M1 – The Crab Nebula, M8 – The Lagoon Nebula, and David Dickison’s articles on the 2013 and 2014 Messier Marathons.
After 14 years of labor, scientists at the CERN laboratory outside Geneva successfully activated the Large Hadron Collider, the world’s largest, most powerful particle collider and, at $8 billion, the most expensive scientific experiment to date.
At 4:28 a.m., Eastern time, the scientists announced that a beam of protons had completed its first circuit around the collider’s 17-mile-long racetrack, 300 feet underneath the Swiss-French border. They then sent the beam around several more times.
“It’s a fantastic moment,” said Lyn Evans, who has been the project director of the collider since its inception in 1994. “We can now look forward to a new era of understanding about the origins and evolution of the universe.”
Eventually, the collider is expected to accelerate protons to energies of seven trillion electron volts and then smash them together, recreating conditions in the primordial fireball only a trillionth of a second after the Big Bang. Scientists hope the machine will be a sort of Hubble Space Telescope of inner space, allowing them to detect new subatomic particles and forces of nature.
An ocean away from Geneva, the new collider’s activation was watched with rueful excitement here at the Fermi National Accelerator Laboratory, or Fermilab, until now home to the reigning particle collider.
Several dozen physicists, students and onlookers, and three local mayors gathered overnight to watch the dawn of a new high-energy physics. They applauded each milestone as the scientists methodically steered the protons on their course at CERN, the European Organization for Nuclear Research.
Many of them, including the lab’s director, Pier Oddone, were wearing pajamas or bathrobes or even nightcaps bearing Fermilab “pajama party” patches on them.
Outside, a half moon was hanging low in a cloudy sky, a reminder that the universe was beautiful and mysterious and that another small step into that mystery was about to be taken.
Dr. Oddone, who earlier in the day admitted it was a “bittersweet moment,” lauded the new machine as the result of “two and a half decades of dreams to open up this huge new territory in the exploration of the natural world.”
Robert Aymar, CERN’s director general, called the new collider a “discovery machine.” The buzz was worldwide. On the blog “Cosmic Variance,” Gordon Kane of the University of Michigan called the new collider “a why machine.”
Others, worried about speculation that a black hole could emerge from the proton collisions, had called it a doomsday machine, to the dismay of CERN physicists who can point to a variety of studies and reports that say that this fear is nothing but science fiction.
But Boaz Klima, a Fermilab particle physicist, said that the speculation had nevertheless helped create buzz about particle physics. “This is something that people can talk to their neighbors about,” he said.
The only thing physicists agree on is that they do not know what will happen — what laws and particles will prevail — when the collisions reach the energies just after the Big Bang.
“That there are many theories means we don’t have a clue,” said Dr. Oddone. “That’s what makes it so exciting.”
Many physicists hope to materialize a hypothetical particle called the Higgs boson, which according to theory endows other particles with mass. They also hope to identify the nature of the invisible dark matter that makes up 25 percent of the universe and provides the scaffolding for galaxies. Some dream of revealing new dimensions of space-time.
But those discoveries are in the future. If the new collider were a car, then what physicists did Wednesday was turn on an engine that will now warm up for a couple of months before anyone drives it anywhere. The first meaningful collisions, at an energy of five trillion electron volts, will not happen until late fall.
Nevertheless, the symbolism of the moment was not lost on all those gathered here.
Once upon a time the United States ruled particle physics. For the last two decades, Fermilab’s Tevatron, which hurls protons and their mirror opposites, antiprotons, together at energies of a trillion electron volts apiece, was the world’s largest particle machine.
By year’s end, when the CERN collider has revved up to five trillion electron volts, the Fermilab machine will be a distant second. Electron volts are the currency of choice in physics for both mass and energy. The more you have, the closer and hotter you can punch back in time toward the Big Bang.
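To make that currency concrete, here is a back-of-the-envelope sketch converting beam energies to joules and Lorentz factors (the ~938 MeV proton rest energy is a standard value, not from the article):

```python
# Electron volts as a currency for both energy and mass (E = gamma * m * c^2).
EV_TO_J = 1.602176634e-19     # joules per electron volt
PROTON_MASS_EV = 938.272e6    # proton rest energy in eV

def gamma(beam_energy_ev):
    """Lorentz factor of a proton with the given total energy."""
    return beam_energy_ev / PROTON_MASS_EV

for name, energy_ev in [("Tevatron", 1.0e12), ("LHC", 7.0e12)]:
    g = gamma(energy_ev)
    beta = (1.0 - 1.0 / g**2) ** 0.5   # speed as a fraction of c
    print(f"{name}: {energy_ev * EV_TO_J:.2e} J per proton, "
          f"gamma = {g:.0f}, v/c = {beta:.9f}")
```

A 7 TeV proton carries about 7,500 times its rest energy, which is what "punching back in time toward the Big Bang" buys.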
In 1993, the United States Congress canceled plans for an even bigger collider and more powerful machine, the Superconducting Supercollider, after its cost ballooned to $11 billion. In the United States, particle physics never really recovered, said the supercollider’s former director, Roy F. Schwitters of the University of Texas in Austin. “One nonrenewable resource is a person’s time and good years,” he said.
Dr. Oddone, Fermilab’s director, said the uncertainties of steady Congressional financing made physics in the United States unduly “suspenseful.”
CERN, on the other hand, is an organization of 20 countries with a stable budget established by treaty. The year after the supercollider was killed, CERN decided to build its own collider.
Fermilab and the United States, which eventually contributed $531 million for the collider, have not exactly been shut out. Dr. Oddone said that Americans constitute about a quarter of the scientists who built the four giant detectors that sit at points around the racetrack to collect and analyze the debris from the primordial fireballs.
In fact, a remote control room for monitoring one of those experiments, known inelegantly as the Compact Muon Solenoid, was built at Fermilab, just off the lobby of the main building here.
“The mood is great at this place,” he said, noting that the Tevatron was humming productively and still might find the Higgs boson before the new hadron collider.
Another target of physicists is a principle called supersymmetry, which predicts, among other things, that a vast population of new particle species is left over from the Big Bang and waiting to be discovered, one of which could be the long-sought dark matter.
The festivities started at 2 a.m. Chicago time. Speaking by satellite, Dr. Evans, the collider project director at CERN, outlined the plan for the evening: sending a bunch of protons clockwise farther and farther around the collider, stopping them and checking their orbit, until they made it all the way. He noted that for a previous CERN accelerator it had taken 12 hours. “I hope this will go much faster,” he said.
Twenty minutes later, the displays in the control room showed that the beam had made it to its first stopping point. A few minutes later, the physicists erupted in cheers when their consoles showed that the muon solenoid had detected collisions between the beam and stray gas molecules in the otherwise vacuum beam pipe. Their detector was alive and working.
Finally at 3:28 Chicago time (10:28 a.m. at CERN), the display showed the protons had made it all the way around to another big detector named Atlas.
At Fermilab, they broke out the Champagne. Dr. Oddone congratulated his colleagues around the world. “We have all worked together and brought this machine to life,” he said. “We’re so excited about sending a beam around. Wait until we start having collisions and doing physics.” | 0.850462 | 3.014015 |
This feature highlights a number of meteor showers, comets and asteroids which are visible during the month of July 2010.
July 2010 Highlights
* Total Solar Eclipse on the 11th for the South Pacific
* Venus, Mars and Saturn close in on each other in the evening sky
* Venus passes within 2° of bright star Regulus on the 8th
* Mars and Saturn within 1.8° of each other on the 30th
* Mercury has a mediocre evening apparition in July/August (great from SH)
* Mercury passes within 0.3° of Regulus on the 27th
* Comet 10P/Tempel 2 reaches small telescope brightness in the morning sky
Note: If anyone has pictures or observations of these objects/events and want to share them, send me a comment and I’ll post them on the blog.
Venus – Venus is the brightest “star” visible in the early evening (at magnitude -4.2). Low in the west, it sets about 2 hours after the Sun. Maximum height above the horizon was reached over a month ago. As a result, Venus will appear to sink lower in the sky every night. Still, it will be well placed for easy observing for the next 2 months. If you are located south of the equator, this is a great apparition and Venus will continue to climb higher till late August. Regardless of where you are located, it will be hard to miss brilliant -4 magnitude Venus in the west an hour or 2 after sunset.
July 10 - Venus within 1.0° of bright star Regulus
July 14 - Moon passes within 5.5° of Venus
Mars – Mars moves rapidly from the constellation of Leo into Virgo this month. Though fading from magnitude +1.3 to +1.5, it is still an obvious red beacon in the southwest right after sundown. Its brightness is comparable to that of the other bright stars. Mars starts the month 23° from Venus and 15° from Saturn. By the end of the month, Mars will have caught up to Saturn. Venus isn’t far behind and all three planets will share the same part of the sky in August.
July 16 - Moon passes within 5.6° of Mars
July 30 - Mars and Saturn within 1.8° of each other
Saturn – This month Saturn is located in Virgo and visible in the southwest during the early evening hours. At magnitude +1.1 it is slightly brighter than Mars. The two will be within 2° of each other at the end of the month. Telescope users should note that Saturn’s rings are still within a few degrees of edge-on.
July 16 - Moon passes within 7.4° of Saturn
July 30 - Saturn and Mars within 1.8° of each other
Jupiter – Jupiter once again returns to sight as a brilliant star in the east-southeast before dawn. The magnitude -2.6 planet will get brighter and better placed for observing over the next few months. Last year Jupiter made a series of close approaches to Neptune. This year Jupiter will do the same with Uranus. All month long Jupiter will be located within 2-3° of Uranus.
July 3 - Moon passes within 6.5° of Jupiter
July 31 - Moon passes within 6.6° of Jupiter
Mercury – Mercury will start the month too close to the Sun for observation. By mid-month, it starts to peek above the western horizon after sundown. The apparition is a great one for southern hemisphere observers but a so-so one for northern observers.
July 12 - Moon passes within 3.9° of Mercury
July 27 - Mercury passes within 0.3° of bright star Regulus
Meteor activity should really pick up in July. The year is usually split in two: January through June has low rates and few major showers, while July through December (really through the first week of January) has high rates and many major showers.
Sporadic meteors are not part of any known meteor shower. They represent the background flux of meteors. Except for the few days per year when a major shower is active, most meteors that are observed are Sporadics. This is especially true for meteors observed during the evening. During July, 10-16 or so Sporadic meteors can be observed per hour from a dark moonless sky.
Major Meteor Showers
No major showers are active this month.
Minor Meteor Showers
Minor showers produce so few meteors that they are hard to notice above the background of regular meteors. Starting this month, info on most of the minor showers will be provided on a weekly basis by Robert Lunsford’s Meteor Activity Outlook.
Additional information on these showers and other minor showers not included here can be found at the following sites: Wayne Hally’s and Mark Davis’s NAMN Notes, and the International Meteor Organization’s 2010 Meteor Shower Calendar.
Naked Eye Comets (V < 6.0)
Binocular Comets (V = 6.0 – 8.0)
Small Telescope Comets (V = 8.0 – 10.0)
Comet 10P/Tempel 2
’10P’ says it all. This was only the 10th comet to be observed at a 2nd apparition, meaning we’ve been following this comet for a long time. Discovered by prolific German comet discoverer Ernst Wilhelm Leberecht Tempel in Marseille, France on July 4, 1873, Tempel 2 has been observed at nearly every return since then. The comet’s current orbit brings it to within 1.42 AU of the Sun on July 4 and to within 0.65 AU of Earth in late August.
The comet is currently at a brightness of 9.0 to 9.5 magnitude and should brighten by another half magnitude this month. This is a large diffuse object so it will be more difficult to see than your average 9th magnitude comet or deep sky object. From my moderately light polluted backyard and 12″ telescope, the comet was a difficult object and was estimated to be magnitude 10.0. From a dark site and 30×125 binoculars, the comet was much brighter (magnitude 9.5), larger and easier to see. The added brightness was probably due to the dark site allowing me to see much more of the comet’s coma.
Tempel 2 is a morning object moving from the constellation of Aquarius to Cetus.
A finder chart for Comet Tempel 2 can be found at Comet Chasing.
Comet C/2009 K5 (McNaught)
If you are looking for Comet C/2009 R1 (McNaught), which was a nice bright naked eye comet last month, this Comet McNaught isn’t the comet you’re looking for. C/2009 R1 is now too close to the Sun to be seen. The lesser known, and fainter but more observable, ‘Comet McNaught’ is Comet C/2009 K5 (McNaught). This will probably be the last month to catch a glimpse of this comet in backyard telescopes.
With perihelion back on April 30 of this year at a distance of 1.42 AU from the Sun, C/2009 K5 may still be bright enough to be seen in small backyard telescopes from dark sites. At mid-month it will be located 1.78 AU from the Sun and 2.47 AU from Earth.
Observations over the past month show the comet to be around magnitude 8.5. With the comet in full retreat from the Sun and Earth, it should fade rapidly from here on out. The comet will start the month between 8.5 and 9.0 but should fade to fainter than 10.0 by the end of the month. Due to its location in the northern constellations of Camelopardalis and Lynx, the comet can be seen at all hours of the night from high northern latitudes. It is best in the evening right after the end of twilight.
A finder chart for Comet McNaught can be found at Comet Chasing.
Binocular and Small Telescope Asteroids (V < 9.0)
Ceres is the biggest asteroid in the Main Belt with a diameter of 585 miles or 975 km. It is so big that it is now considered a Dwarf Planet. Classified as a carbonaceous (carbon-rich) Cg-type asteroid, there are suggestions that it may be rich in volatile material such as water. Some even propose that an ocean exists below its surface. Ceres is the other target of NASA’s Dawn spacecraft which is scheduled to visit it in 2015.
This month Ceres will be at opposition and brightest. The asteroid will start the month at magnitude 7.4 and fade to magnitude 8.1 by the end of the month. All month long it will be retrograding on the border of Sagittarius and Ophiuchus. | 0.843384 | 3.299956 |
A montage of Jupiter and its volcanic moon Io, taken during by the New Horizons spacecraft – en route to Pluto – in early 2007. Notice the volcanic plume above Io’s darkened surface. Image via NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute/Goddard Space Flight Center/Cosmos.
When we hear about volcanoes, we naturally tend to think of some of Earth’s most famous ones, including the Hawaiian volcanoes, Krakatoa or Mount St. Helens. Earth is a very volcanically active place; however, it is not the most active in the solar system. That would be Jupiter’s moon Io.
We on Earth first learned about Io’s volcanoes nearly 40 years ago, when NASA’s Voyager 1 spacecraft flew past this Jovian moon. Now, scientists have completed a comprehensive new peer-reviewed report on Io’s volcanoes, first published in The Astrophysical Journal on June 21, 2019, based on ground-based observations. The report covers five years of observations from 2013-2018, using advanced instrumentation on the Keck and Gemini telescopes.
Scientists had already known how volcanically active Io is. Its surface is dotted with hundreds of active volcanoes, despite this moon’s small size and its location at Jupiter’s orbit, much farther from the sun than Earth, in a ... | 0.840616 | 3.188067 |
On the eve of the third anniversary of her nail-biting touchdown inside Gale Crater, NASA’s car-sized Curiosity Mars Science Laboratory (MSL) rover has discovered a new type of Martian rock that’s surprisingly rich in silica – and unlike any other targets found before.
Excited by this new science finding on Mars, Curiosity’s handlers are now gearing the robot up for her next full drill campaign today, July 31 (Sol 1060) into a rock target called “Buckskin” – which lies at the base of Mount Sharp, the huge layered mountain that is the primary science target of this Mars rover mission.
“The team selected the “Buckskin” target to drill,” says Lauren Edgar, Research Geologist at the USGS Astrogeology Science Center and an MSL science team member, in a mission update.
“It’s another exciting day on Mars!”
See the rover at work reaching out with her robotic arm and drilling into Buckskin, as illustrated in our new mosaics of navcam camera images created by the image processing team of Ken Kremer and Marco Di Lorenzo (above and below). Also featured at Alive Universe Images – here.
For about the past two months, the six wheeled robot has been driving around and exploring a geological contact zone named “Marias Pass” – an area on lower Mount Sharp, by examining the rocks and outcrops with her suite of state-of-the-art science instruments.
The goal is to provide geologic context for her long term expedition up the mountain’s sedimentary layers to study the habitability of the Red Planet over eons of time.
Data from Curiosity’s laser-firing Chemistry & Camera (ChemCam) and Dynamic Albedo of Neutrons (DAN) instruments “show elevated amounts of silicon and hydrogen, respectively,” in certain local area rocks, according to the team.
Silica is a rock-forming compound containing silicon and oxygen, commonly found on Earth as quartz.
“High levels of silica could indicate ideal conditions for preserving ancient organic material, if present, so the science team wants to take a closer look.”
Therefore the team scouted targets suitable for in depth analysis and sample drilling and chose “Buckskin”.
“Buckskin” is located among some high-silica and hydrogen enriched targets at a bright outcrop named “Lion.”
An initial test bore operation was conducted first to confirm that it was indeed safe to drill into “Buckskin” and would cause no harm to the rover before committing to the entire operation.
The bore hole is about 1.6 cm (0.63 inch) in diameter.
“This test will drill a small hole in the rock to help determine whether it is safe to go ahead with the full hole,” elaborated Ryan Anderson, planetary scientist at the USGS Astrogeology Science Center and an MSL science team member.
So it was only after the team received back new high resolution imagery last night from the arm-mounted MAHLI camera which confirmed the success of the mini-drill operation, that the “GO” was given for a full depth drill campaign. MAHLI is short for Mars Hand Lens Imager.
“We successfully completed a mini drilling test yesterday (shown in the MAHLI image). That means that today we’re going for the FULL drill hole” Edgar confirmed.
“GO for Drilling.”
So it’s a busy day ahead on the Red Planet, including lots of imaging along the way to document and confirm that the drilling operation proceeds safely and as planned.
“First we’ll acquire MAHLI images of the intended drill site, then we’ll drill, and then we’ll acquire more MAHLI images after drilling,” Edgar explains.
“The plan also includes Navcam imaging of the workspace, and Mastcam imaging of the target and drill bit. In addition to drilling, we’re getting CheMin ready to receive sample in an upcoming plan. Fingers crossed!” Surface observations with the arm-mounted Alpha Particle X-ray Spectrometer (APXS) instrument are also planned.
If all goes well, the robot will process and pulverize the samples for eventual delivery to the onboard pair of miniaturized chemistry labs located inside her belly – SAM and CheMin. Tiny samples will be fed to the inlet ports on the rover deck through the sieved filters.
Meanwhile the team is studying a nearby rock outcrop called “Ch-paa-qn,” which means “shining peak” in the native Salish language of northern Montana.
Anderson says the target is a bright patch on a nearby outcrop. Via active and passive observations with the mast-mounted ChemCam laser and Mastcam multispectral imager, the purpose is to determine if “Ch-paa-qn” is comprised of calcium sulfate like other white veins visible nearby, or perhaps it’s something else entirely.
Before arriving by the “Lion” outcrop last week, Curiosity was investigating another outcrop area nearby, the high-silica target dubbed “Elk” with the ChemCam instrument, while scouting around the “Marias Pass” area in search of tasty science targets for in-depth analysis.
Sometimes the data subsequently returned and analyzed is so extraordinary, that the team decides on a return trip to a spot previously departed. Such was the case with “Elk” and the rover was commanded to do a U-turn to acquire more precious data.
“One never knows what to expect on Mars, but the Elk target was interesting enough to go back and investigate,” said Roger Wiens, the principal investigator of the ChemCam instrument from the Los Alamos National Laboratory in New Mexico.
Soon, ChemCam will have fired on its 1,000th target. Overall the laser blaster has been fired more than 260,000 times since Curiosity landed inside the nearly 100 mile wide Gale Crater on Mars on Aug. 6, 2012, alongside Mount Sharp.
“ChemCam acts like eyes and ears of the rover for nearby objects,” said Wiens.
“Marias Pass” is a geological context zone where two rock types overlap – pale mudstone meets darker sandstone.
The rover spotted a very curious outcrop named “Missoula.”
“We found an outcrop named Missoula where the two rock types came together, but it was quite small and close to the ground. We used the robotic arm to capture a dog’s-eye view with the MAHLI camera, getting our nose right in there,” said Ashwin Vasavada, the mission’s project scientist at NASA’s Jet Propulsion Laboratory in Pasadena, California.
White mineral veins, possibly comprised of calcium sulfate, filled the fractures by depositing the mineral from running groundwater.
“Such clues help scientists understand the possible timing of geological events,” says the team.
Read more about Curiosity in an Italian language version of this story at Alive Universe Images – here.
As of today, Sol 1060, July 31, 2015, she has taken over 255,000 amazing images.
Curiosity recently celebrated 1000 Sols of exploration on Mars on May 31, 2015 – detailed here with our Sol 1000 mosaic also featured at Astronomy Picture of the Day on June 13, 2015.
Stay tuned here for Ken’s continuing Earth and planetary science and human spaceflight news. | 0.830457 | 3.299219 |
Almost exactly one year after discovering dwarf planet Pluto's fourth moon - though not before actually naming poor little P4 - NASA announced Wednesday a fifth moon has been discovered orbiting the ex-planet.
Astronomers using the Hubble Space Telescope found the irregularly shaped moon, which they said measures 6 to 15 miles across. For now, it's being called P5.
"The moons form a series of neatly nested orbits, a bit like Russian dolls," said Mark Showalter of the SETI Institute in a statement released by the European Space Agency. Showalter is the leader of the scientific team that discovered the new moon.
The moon was detected in nine separate sets of images taken by Hubble's Wide Field Camera 3 on June 26, 27 and 29, and July 7 and 9, NASA says.
Pluto's other moons are Charon, Nix and Hydra.
The team at SETI Institute is "intrigued" that Pluto, deemed unworthy of planethood in 2006, could have "such a complex collection of satellites," the statement said.
The leading theory is that all the moons are remnants of a collision billions of years ago between Pluto and another large object from the Kuiper Belt - the region of the solar system beyond Neptune.
Because Pluto is so far away from Earth, the images of P5 look like small white dots.
But a NASA spacecraft that is on its way to Pluto will give scientists better images and details about the former planet and its neighbors.
The New Horizons spacecraft was launched in 2006, just months before Pluto was demoted by the International Astronomical Union, and is now about halfway to the icy dwarf planet. It's due to fly past in July 2015.
Hubble is on a scouting mission in support of the spacecraft, and is providing valuable information to guide its flight.
"All of this stuff poses a navigation hazard for New Horizons," said Ray Villard, news director for the Space Telescope Science Institute, which operates Hubble's science mission. "It's a messy place. You have moons and perhaps small particles."
Hubble, launched into orbit in 1990, received new instruments in 2009.
"It's at the peak of its performance," Villard said.
Hubble is expected to remain operational through the end of the decade. The James Webb Space Telescope, slated for launch in 2018, will have a larger mirror and will study Pluto after the New Horizons mission.
"Finding this moon was exciting and shows us Pluto is an intriguing and complicated place," Villard told CNN.
CNN's Phil Gast contributed to this report. | 0.812127 | 3.097949 |
Our solar system is currently being graced by the presence of a bona fide alien: 2I/ Borisov, a comet that came from another star.
Its interstellar origins are not in any real doubt; it came screaming in from deep space so quickly there's no way it started here and somehow got a slingshotted gravitational boost from a planet. Wherever it came from, it ain't local.
But where, exactly? Not too long ago I wrote about a study where scientists backtracked its path, trying to figure out if it came from a nearby star. Their best bet was Kruger 60, a binary star currently about 13 light years from Earth. About a million years ago, the comet passed them by about 5 light years, and the authors call this a plausible source for it.
I disagree. 50 trillion kilometers is a long way. If it came from, say, a few hundred billion kilometers I'd be more inclined to agree. Even the extended cloud of comets around a star (called the Oort Cloud) tends to be roughly a trillion kilometers across, not 50. So I'm not swayed.
But new research was just published pointing to a different star. They find that 910,000 years ago, Borisov passed just 0.22 light years from the star Ross 573. That's a distance of a little over 2 trillion kilometers, making it a far better candidate for the comet's original home. It's still not conclusive, but it's interesting.
The method they used is pretty cool. Remember, we're talking about this thing traveling for hundreds of thousands if not millions of years, so you not only have to trace the comet's trajectory back in space, but you also have to account for the motions of stars in that time as well! To do this they used data on over 7 million stars from the European Space Agency's Gaia mission, which I've written about many many times. It measures the brightness, positions, and most importantly the change in positions over time of well over a billion stars (yes, billion with a b). This change, called the proper motion, tells us the actual motion of a star through space, and that means it can be traced forward into the future or backward into the past.
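The core of that backtracking idea can be sketched as straight-line extrapolation: take each body's current position and space velocity, run time backwards, and look for the epoch of closest approach. Here is a minimal toy version; the state vectors below are invented for illustration (they are not the real Borisov or Ross 573 values), and real work would also integrate orbits in the Galactic potential and propagate measurement uncertainties.

```python
import numpy as np

KMS_TO_PC_PER_MYR = 1.023  # 1 km/s is about 1.023 parsecs per million years

def closest_approach(p1, v1, p2, v2, t_grid):
    """Find the epoch of minimum separation for two bodies moving on
    straight lines; positions in pc, velocities in pc/Myr, times in Myr."""
    seps = [np.linalg.norm((p1 + v1 * t) - (p2 + v2 * t)) for t in t_grid]
    i = int(np.argmin(seps))
    return t_grid[i], seps[i]

# Toy state vectors (NOT real data): a comet receding at 23 km/s, and a
# star assumed at rest 21.4 pc away along the comet's incoming track.
comet_p = np.zeros(3)
comet_v = np.array([23.0, 0.0, 0.0]) * KMS_TO_PC_PER_MYR
star_p = np.array([-21.4, 0.0, 0.0])
star_v = np.zeros(3)

times = np.linspace(-2.0, 0.0, 2001)  # scan the last 2 million years
t_min, d_min = closest_approach(comet_p, comet_v, star_p, star_v, times)
```

With these made-up inputs the minimum separation lands a bit under a million years in the past, which is the same kind of answer the study extracts, per star, from the Gaia astrometry.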
Even that wasn't enough, though. They also did something clever: They added in the possibility of cometary outgassing. Comets are made of ices, gases frozen by the terrible cold of deep space. When they get near a star that ice sublimates, turning directly into a gas, and blows out of the comet. This acts like a rocket motor, exerting a force on the comet that can change its trajectory over time. They model this using several different methods to see if they could come to a consensus about the comet's motion and potential origin.
They found 14 stars that came within a parsec (3.26 light years) of Borisov in the past. Of these, Ross 573 was the closest. It's a red dwarf, dim and cool compared to the Sun, and currently about 70 light years from Earth. They find that the extended cloud of comets surrounding the star is very roughly a light year in radius. Given that they find the comet came from the star at a quarter of a light year, this is then entirely plausible.
There is a problem here, though, and that's Borisov’s speed. If it came from Ross 573, then it was ejected at about 23 kilometers per second. That's pretty fast, and difficult to explain using the slingshot effect from a planet. Basically, if a comet swings past a planet on the right trajectory, it can steal some of the planet's energy of motion as it orbits the star, which boosts the comet’s velocity. You can get a few km/sec out of something like this, but 23 km/sec is a big ask. If Ross 573 were a binary that would help, since a star can give the comet a much bigger kick, but no other star is seen there.
The authors looked at the other 13 stars, but the comet's closest approach to each is larger, making it less likely it came from them. Some are binaries, so the velocity is less of a problem, but they find other problems with those systems.
As I read the paper I wondered what the odds are of a comet passing a star by 0.22 light years just randomly, and it turns out they do that calculation: You’d expect it to happen roughly once every 11 million years, give or take. Since the pass of Ross 573 happened less than a million years ago, that implies the encounter wasn’t random, implying it did come from there. Again, it’s not proof, but it supports the idea of Ross 573 being the home star.
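That once-per-11-million-years figure is the kind of number you can sanity-check with a textbook encounter-rate estimate, rate ≈ n × σ × v, where n is the local stellar density, σ the cross-section for a pass inside the chosen impact parameter, and v the typical relative speed. The density and speed below are rough assumed values, not numbers from the paper:

```python
import math

n_stars = 0.1               # assumed local stellar density, stars per cubic parsec
b = 0.22 / 3.26             # 0.22 light years expressed in parsecs
v_rel = 50.0 * 1.023        # assumed relative speed: 50 km/s in pc per Myr

rate = n_stars * math.pi * b**2 * v_rel  # expected random passes per Myr
mean_wait_myr = 1.0 / rate               # mean wait between such passes
```

Even with these crude inputs the mean wait comes out around 14 million years, the same order as the paper's ~11 million, which is why a sub-million-year encounter looks suspiciously non-random.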
In their modeling they also assume that the comet has not been traveling for more than 10 million years, because the accuracy and completeness of the Gaia catalogue gets pretty fuzzy looking back in time farther than that. If it has been plying interstellar space longer than that, then the encounter with Ross 573 is a coincidence, and it came from some star even farther away. If that's the case, then it's difficult to say where it came from. We may never find out.
That's a little bit aggravating, but my scientific itch is mollified by knowing that there will be more objects like this passing by. Heck, Borisov was discovered just a couple of years after 1I/’Oumuamua, the first known interstellar visitor (and still much weirder than Borisov, which looks like pretty much any ordinary solar system comet except for its speed). That strongly implies we will find lots more, especially when the Vera C. Rubin Observatory goes online in a few years. That will look at huge swaths of the sky and may find plenty more where these came from.
And maybe help us find exactly where one of these came from! That would be amazing. And given that the ESA is looking to build a mission that can try to catch up with and follow an interstellar comet for close examination, this may be the best way we have to look at another star’s denizens up close. It sounds like science fiction, but these days we're making big strides toward turning a lot of that fiction into fact. | 0.834352 | 3.892507 |
During the first week in June of this year a major planetary conjunction will occur in the constellation of Taurus, (the Bull).
On June 2nd the moon passes below (south of) the solar position, so that at noon their positions relative to the constellation lines of Taurus occur as depicted at right.
A conjunction of Jupiter and Saturn occurs every 20 years
A Jupiter-Saturn conjunction occurs opposite the Sun on average every 300 years.
A Jupiter / Saturn / Solar conjunction, in the constellation of Taurus, then occurs on average only once every 3600 years.
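The 20-year figure quoted above follows from the two planets' orbital periods: the synodic (conjunction-to-conjunction) interval is the reciprocal of the difference of their orbital frequencies. A quick check, using standard orbital periods rather than anything stated in the text:

```python
# Synodic period of two planets orbiting the same star:
# 1 / (1/P_inner - 1/P_outer)
P_JUPITER = 11.862  # sidereal orbital period in years
P_SATURN = 29.457

synodic_years = 1.0 / (1.0 / P_JUPITER - 1.0 / P_SATURN)  # close to 20 years
```

The longer averages quoted for Sun-opposed and Taurus-specific conjunctions then come from imposing extra alignment constraints on top of this basic ~20-year cycle.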
In researching the hypothesis that the "Great Ennead", mentioned in Ancient Egyptian writings, was reference to a ‘grand’ astronomical conjunction in our present age, I discovered a marked correlation between the ‘Taurus 2000 Conjunction’ and the interior design of the Great Pyramid.
From its intended eastern vantage, the cross-section of the Great Pyramid’s corridors has several points of similarity to the constellation lines of Taurus.
Moreover, the location of its chambers correlate to the position of the sun, moon and planets with the constellation lines of Taurus, at this particular time and location.
- The Sun correlates to the King’s Chamber;
- Venus to its Antechamber;
- the Moon to the Queen’s Chamber;
- Jupiter and Saturn to the northern ‘star-shafts’;
- Mars and Mercury to the southern ‘star-shafts’.
- The subterranean chamber is then a connective link to the constellation of Orion (Osiris).
Did the Ancient Egyptians believe some event of great import may occur at this time? If so, what could it be?
The reason I was looking for astronomical conjunctions in the first place was that this premise flowered at the end of a very tall logic-tree. This ‘tree’ has two main ‘trunks’: (1) GDT: a proposed system of physics with characteristics differing from "modern Physics"; (2) Ancient Egyptian Physics: an interpretation of Ancient Egyptian writings and geometry as a symbolic, metaphoric scientific language, in particular consonance with the principles of G-D Physics.
Highlights of GDT Logic Tree:
- G-D Physics indicates that the cores of stars are composed of dense heavy metals. The interior of the stellar core is stable, (non-reactive) while the core’s surface is a maelstrom of fission, fusion and productive reactions. (Chapter 7)
- Productive reactions (i.e., resulting in a net increase in mass) occur when an atom’s electrical field is so constrained by its surrounding gravitational / electrical fields as to be unable to release energy. Vibrational energy absorbed by the atom is then transformed into new particles of matter. (Chapter 7)
- In general, stars continually increase in mass until a critical core-fission event results in a mega-scale eruption of material. These events, the nova and supernova, culminate in the material for new stars, planets, and nebulae. Thus, beginning from a single star, the result would be a galaxy, or a universe. (Chapter 7)
- Planets with active cores are also inferred as to be mass productive; thus, the Earth’s mass, volume, and surface area have been slowly increasing for eons. Volcanic eruptions, mountain building, earthquakes and continental spreading are all causally connected to mass-increase in the core. (Chapter 8)
- Large-scale fauna in the primeval past, (dinosaurs being the premier example), were able to exist simply because the planet’s surface gravity was less than it is today. (Chapter 8)
- Core-fission events in the Earth’s pre-history have led to mass extinctions of life-forms. The primary example is the Permian Catastrophe, in which 90-95% of existent species went extinct, 250 million years ago. It is proposed this was caused by a mega-volcanic eruption in the Eastern Pacific, which melted a continent and gave birth to the Moon. (Chapter 8)
An Ancient Egyptian Science
During the development of the parameters of gravity-induced ellipsoidal space, I noticed a curiously intriguing similarity to the interior design of the Great Pyramid at Giza. Further research showed a strong congruency to the hypothesis that the Great Pyramid works out the parameters of 4-D Space as influenced by a gravity vector of 0.44c : (Chapter 9)
This might be considered as coincidental, except that a sufficient number of other geometrical correlations are found as to rule out "simple" coincidence. (Chapters 9-10) Furthermore, from the premise that the Ancient Egyptian gods, "neters", are in fact, "natures", i.e., symbolic, metaphorical, representations of the principal forces of Nature, a near-perfect congruency is found with G-D Physics. Thus, it appears that "Ancient Egyptian Physics" and G-D Physics are one and the same. Given the G-D code-key then, the principle natures in the Ancient Egyptian pantheon are readily decoded: (Chapter 11)
Atum, the neter who created the universe, while standing on a mound composed of himself, represents atoms, (which create new mass in the universe).
Atum-Ra is the Sun, as creator of the solar system.
Nun, the vacuum of outer space, is held at bay by Shu, the earth’s atmosphere.
Thoth is declared to be the "heart", or center, of Ra, while somehow simultaneously residing in the underworld, (in the center of the Earth) and in the Moon as well. Taken literally, this appears to make little sense, but when Thoth is interpreted as representing the force of gravity, a strong correlation is found.
Thoth is said to have retrieved Ra’s lost eye, (i.e., gravitational capture of planets), to have determined the shape of the Earth, (gravity shapes the earth) and to guide the course of the stars, (gravitational orbits).
Thoth has existed since the beginning of the universe; was created by the power of utterance, but was the inventor of speech. This apparent contradiction is resolved when it is realized that sound waves are gravity waves; thus, changes in the position of mass, (vibrations) create gravity waves, which carry sound.
It appears that Egyptian mythology is multi-dimensional symbology, functioning on different levels of meaning within different contexts. This is even hinted at in verses which refer to the Doubles of the gods, (i.e., double meanings). Thus, previous interpretations, within religious, historical, or astronomical contexts, are not considered as contradicted by the scientific interpretation; but rather, should be regarded as alternate intended meanings.
Osiris, generally regarded as the most important Ancient Egyptian neter, eternally resides in the underworld on a throne of divine metal; Osiris is inferred to represent the metallic core of the planet.
"The Story of Osiris" (Chapter 12) is a condensed geologic history of the planet, the principal characters being Isis, (the force of mass-energy creation) and Set, (the force of destruction). After Set hacks Osiris’ body into fourteen pieces, (continental plates) Isis fashions a clay (volcano) and mounting it, gives birth to Horus, (the Moon). This then refers to the Permian / Lunar event of 250 million years ago. The Pyramid Texts refer to an event of similar magnitude: not one in the distant past, but in a distressingly imminent future:
"The King ascends to the sky in an earthquake"
"The sky thunders, the earth quakes, Geb quivers, the two domains of the god roar, the earth is hacked up, and the offering is presented to me. I ascend to the sky, I cross the iron sky, I traverse the [waters of Nun]. I demolish the ramparts of the atmosphere … I ascend to the sky among the Imperishable Stars … I sit on my throne of divine metal, the faces of which are those of lions, and its feet are the hooves of the Great Wild Bull."
G-D Physics predicts such an eventual life-ending fate for the planet, but does not say when; Ancient Egyptian Physics, however, appears to be much more sophisticated, with centuries of hidden tradition underlying its science. It provides us with an exact date for when this event will begin.
Bearing in mind that in a geologic time-scale the earth’s core undergoes periodic disruptive episodes, and that as it approaches the criticality of a core-fission event, the core becomes more sensitive to slight changes in the gravitational field, then, a conjunction of the major tidal gravitational influences on the planet could herald the beginning of the core-fission event. Thus, a very sophisticated science, having approximated the time-frame of criticality, could predict the exact conjunction which would initiate the event.
The "opening of the mouth" (core-fission event) with the "adze of wepawawet" (line of greatest gravitational force by external planetary bodies) describes this occurrence. The crucial grand conjunction is then described as, "the Great Ennead in the Mansion of the prince in the City of the Sun".
"… O Osiris-King, I open your mouth for you with the adze of Wepwawet, I split open your mouth for you with the adze of divine metal which split open the mouths of the [stars] … Horus has split open the mouth of this Osiris-King … with the adze of divine metal which split open the mouths of the [stars]. The King’s mouth is split open with it, and he goes and himself speaks with the Great Ennead in the Mansion of the Prince which is in On …"
If I have correctly read all the branches of the deductive-inductive logic tree leading to this conclusion, then there are two principal Taurus 2000 scenarios possible:
- they were predicting that a geological event of significant magnitude will occur simultaneously with this conjunction; thus, Taurus 2000 will be a critical warning that things are not right within this planet.
- they were predicting Taurus 2000 will be the beginning phase of an E.L.E., on the level of the Permian Catastrophe.
Also: texts referring to a "lesser Ennead" (which precedes the Great Ennead) may then be referring to a similar conjunction occurring on 5 May 2000. If so, then, it can also be considered as part of the warning.
In accordance with this hypothesis, I will make three predictions:
- one or more major geological events occurring on or near the date of May 5, 2000;
- one or more major geological events occurring on or near the date of June 2, 2000;
- at least one of these events will occur in the Middle East.
If these predictions prove accurate, then, the entire preceding logic tree is supported; thus, this becomes a systems-check for both GDT and Ancient Egyptian Physics.
It will also definitely be time to consider the following hypothesis: the deduced Giza methodology of preventing the "opening of the mouth" (Chapter 13).
Astronomers discover new distant dwarf planet beyond Neptune
An international team of astronomers has discovered a new dwarf planet orbiting in the disk of small icy worlds beyond Neptune. The new object is roughly 700 kilometers in size and has one of the largest orbits for a dwarf planet. Designated 2015 RR245 by the International Astronomical Union's Minor Planet Center, it was found using the Canada-France-Hawaii Telescope on Maunakea, Hawaii, as part of the ongoing Outer Solar System Origins Survey (OSSOS).
"The icy worlds beyond Neptune trace how the giant planets formed and then moved out from the sun. They let us piece together the history of our solar system. But almost all of these icy worlds are painfully small and faint: It's really exciting to find one that's large and bright enough that we can study it in detail," said Dr Michele Bannister of the University of Victoria in British Columbia, who is a postdoctoral fellow with the survey.
National Research Council of Canada's Dr JJ Kavelaars first sighted RR245 in February 2016 in the OSSOS images from September 2015. "There it was on the screen—this dot of light moving so slowly that it had to be at least twice as far as Neptune from the sun," said Bannister.
The team became even more excited when they realized that the object's orbit takes it more than 120 times further from the sun than Earth. The size of RR245 is not yet exactly known, as its surface properties need further measurement. "It's either small and shiny, or large and dull," said Bannister.
The vast majority of the dwarf planets like RR245 were destroyed or thrown from the solar system in the chaos that ensued as the giant planets moved out to their present positions: RR245 is one of the few dwarf planets that has survived to the present day—along with Pluto and Eris, the largest known dwarf planets. RR245 now circles the sun among the remnant population of tens of thousands of much smaller, mostly unobserved trans-Neptunian worlds.
Worlds that journey far from the sun have exotic geology with landscapes made of many different frozen materials, as the recent flyby of Pluto by the New Horizons spacecraft showed.
Further than 12 billion km (80 AU) from the sun, RR245 is traveling toward its closest approach at 5 billion km (34 AU), which it will reach around 2096. RR245 has been on its highly elliptical orbit for at least the last 100 million years.
As RR245 has only been observed for one of the 700 years it takes to orbit the sun, how its orbit will evolve in the far future is still unknown; its precise orbit will be refined over the coming years, after which RR245 will be given a name. As discoverers, the OSSOS team can submit their preferred name for RR245 to the International Astronomical Union for consideration.
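The quoted figures hang together: Kepler's third law links the orbit's size to its 700-year period. The back-of-the-envelope check below uses the article's approximate numbers (34 AU closest approach, an aphelion around 120 times the Earth-sun distance) purely as an illustration.

```python
# Illustrative consistency check of RR245's orbit via Kepler's third law
# (for bodies orbiting the sun: P^2 = a^3, with P in years and a in AU).
# Both distances are the rough values quoted in the article.

perihelion_au = 34.0   # closest approach, ~5 billion km
aphelion_au = 120.0    # roughly 120x the Earth-sun distance

# The semi-major axis is the average of perihelion and aphelion distances.
a = (perihelion_au + aphelion_au) / 2   # ~77 AU

# Kepler's third law then gives the orbital period.
period_years = a ** 1.5                 # ~676 years, close to the ~700 quoted

print(f"semi-major axis ~ {a:.0f} AU, period ~ {period_years:.0f} years")
```

The result, roughly 680 years, agrees well with the article's "700 years it takes to orbit the sun."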
"OSSOS was designed to map the orbital structure of the outer solar system to decipher its history," said Prof. Brett Gladman of the University of British Columbia in Vancouver. "While not designed to efficiently detect dwarf planets, we're delighted to have found one on such an interesting orbit."
RR245 is the largest discovery and the only dwarf planet found by OSSOS, which has discovered more than 500 new trans-Neptunian objects. "OSSOS is only possible due to the exceptional observing capabilities of the Canada-France-Hawaii Telescope. CFHT is located at one of the best optical observing locations on Earth, is equipped with an enormous wide-field imager, and can quickly adapt its observing each night to new discoveries we make. This facility is truly world-leading," said Gladman.
Previous surveys have mapped almost all the brighter dwarf planets. 2015 RR245 may be one of the last large worlds beyond Neptune to be found until larger telescopes, such as LSST, come online in the mid-2020s.
We are delighted to have been further involved with the Dark Energy Survey (DES), this time making the cast Invar parts.
Having previously worked with developing the Dark Energy Camera for the DES, creating the ring that holds the lens in place, the team knew they could rely on us to deliver the highly accurate work needed.
We cast the parts using Invar, due to its high level of durability and stability.
Unlike other materials, this nickel-iron alloy is able to maintain its shape between temperatures of −100°C and 260°C, making it ideal for a range of applications where accuracy is key, including clock pendulums, measuring devices, aerospace engineering, microscopes and telescopes.
In fact, one of Invar’s original uses was in clock pendulums. When the pendulum clock was first invented, it was the world’s most precise way of telling the time; however, accuracy was limited by thermal variations in pendulums. Around 1898, Sigmund Riefler built the first clocks to use an Invar pendulum, and their unprecedented accuracy (10 milliseconds per day) meant they served as the primary time standard for national time services until the 1930s!
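The advantage is easy to quantify: a pendulum's period scales with the square root of its length, so thermal expansion of the rod translates directly into timing drift. The sketch below uses typical handbook expansion coefficients (illustrative values, not figures from this article):

```python
# Why an Invar pendulum keeps better time: period T ~ sqrt(L), so a
# fractional length change dL/L shifts the clock rate by 0.5 * dL/L.
# Expansion coefficients are typical handbook values (illustrative).

ALPHA_STEEL = 12e-6   # per kelvin, ordinary steel
ALPHA_INVAR = 1.2e-6  # per kelvin, Invar (roughly 10x smaller)

def daily_error_ms(alpha, delta_t_kelvin):
    """Timing drift per day from thermal expansion of the pendulum rod."""
    fractional_rate_change = 0.5 * alpha * delta_t_kelvin
    return fractional_rate_change * 86400 * 1000  # ms per day

# A 5 K room-temperature swing:
print(f"steel: {daily_error_ms(ALPHA_STEEL, 5):.0f} ms/day")   # ~2600 ms
print(f"invar: {daily_error_ms(ALPHA_INVAR, 5):.1f} ms/day")   # ~260 ms
```

With the clock vault's temperature additionally controlled to a fraction of a kelvin, the Invar figure drops toward the 10 ms/day the Riefler clocks achieved.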
What is the Dark Energy Survey?
Completed this year, the Dark Energy Survey (DES) was an ambitious international project which scanned approximately one quarter of the Southern skies (a 5,000 square degree area) in depth, mapping hundreds of millions of galaxies in an attempt to understand ‘dark energy’.
A specialist 520-megapixel camera mounted on a four-meter telescope at the National Science Foundation’s Cerro Tololo Inter-American Observatory in Chile recorded data from more than 300 million galaxies over a six-year period from 2013 to 2019.
Now scientists are going about the huge task of analysing the vast amounts of data to learn more about never-before-seen distant galaxies.
So far, DES has already released exciting scientific results, including the most precise measurement of dark matter structure in the universe and new discoveries such as dwarf satellite galaxies of the Milky Way and the most distant supernova ever detected. | 0.847889 | 3.115858 |
According to a new study, researchers believe that Earth-sized planets can support life at least ten times farther away from their host stars than previously thought.
Researchers from the University of Aberdeen and the University of St Andrews recently published a paper asserting that cold rocky planets previously considered inhospitable to life as we know it may actually be habitable beneath the surface. Phys.org explains that the team of researchers “created a computer model that estimates the temperature below the surface of a planet of a given size, at a given distance from its star.”
The abstract for the paper explains:
The habitable zone (HZ) is conventionally the thin shell of space around a star within which liquid water is thermally stable on the surface of an Earth-like planet (Kasting et al., 1993). However, life on Earth is not restricted to the surface and includes a “deep biosphere” reaching several km in depth. Similarly, subsurface liquid water maintained by internal planetary heat could potentially support life well outside conventional HZs. We introduce a new term, subsurface-habitability zone (SSHZ), to denote the range of distances from a star within which rocky planets are habitable at any depth below their surfaces up to a stipulated maximum, and show how SSHZs can be estimated from a model relating temperature, depth and orbital distance. We present results for Earth-like, Mars-like and selected extrasolar terrestrial planets, and conclude that SSHZs are several times wider and include many more planets than conventional surface-based habitable zones.
PhD student Sean McMahon, one of the team’s members, explains that “The deepest known life on Earth is 5.3 km below the surface, but there may well be life even 10 km deep in places on Earth that haven’t yet been drilled.” He continues, “Using our computer model we discovered that the habitable zone for an Earth-like planet orbiting a sun-like star is about three times bigger if we include the top five kilometres below the planet surface . . . If we go deeper, and consider the top 10 km below the Earth’s surface, then the habitable zone for an Earth-like planet is 14 times wider.”
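The intuition behind these numbers can be sketched with a toy model (my own illustration, not the authors' published model): equilibrium surface temperature falls off as the inverse square root of orbital distance, while internal heat raises temperature with depth at a roughly Earth-like geothermal gradient, so liquid water remains reachable underground far beyond the conventional habitable zone.

```python
# Toy subsurface-habitability sketch (NOT the paper's model): surface
# temperature scales as 1/sqrt(distance) for a sun-like star, and internal
# heat warms the rock with depth. The 25 K/km gradient is a rough
# Earth-like figure, used purely for illustration.

import math

T_EARTH = 255.0   # K, Earth's equilibrium temperature (no greenhouse)
GRADIENT = 25.0   # K per km of depth, rough Earth-like geothermal gradient

def surface_temp(d_au):
    """Equilibrium temperature at distance d (AU) from a sun-like star."""
    return T_EARTH / math.sqrt(d_au)

def habitable_depth_km(d_au, freeze_k=273.0):
    """Shallowest depth at which this toy model reaches the melting point."""
    deficit = freeze_k - surface_temp(d_au)
    return max(0.0, deficit / GRADIENT)

for d in (1.0, 5.0, 10.0):
    print(f"{d:>4} AU: surface {surface_temp(d):.0f} K, "
          f"liquid water from ~{habitable_depth_km(d):.1f} km down")
```

Even at 10 AU, beyond Saturn, the toy model keeps liquid water within the top 10 km, which is the qualitative point of the paper's much wider SSHZ.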
According to Phys.org, our solar system’s current habitable zone extends out as far as Mars, but taking subsurface habitability into account, “this re-drawn habitable zone would see the zone extend out further than Jupiter and Saturn.”
Hoping to encourage other researchers to expand their views on where life may exist, McMahon states, “The results suggest life may occur much more commonly deep within planets and moons than on their surfaces. This means it might be worth looking for signs of life outside conventional habitable zones. I hope people will study the ways in which life below the surface might reveal itself. Because it’s not unimaginable that there might be signs at the surface that life exists deep below.”
This is not the first time McMahon and his colleagues have proposed an expanded habitable zone. The team presented a new habitability model back in 2012, which took into account a planet’s potential for underground water, and, therefore life.
The team’s recent paper was published in the journal Planetary and Space Science.
First, a quick review of the “firsts” that NASA has reported. Its planet-hunting TESS mission found the first “Earth-size Habitable Zone World” and the first “World With Two Stars,” and Hubble found “Water Vapor on an Exoplanet in a Habitable Zone.” Keep a watch on TOI 700d, TOI 1338 b and K2-18b.
TOI 700d is in its sun’s habitable zone, is 20% larger than Earth and is tidally locked to its star. It is probably rocky. TOI 1338 b orbits two stars and is 6.9 times larger than Earth, with a wobbly orbit around its primary star, which is about 10% larger than our sun. K2-18b is in its star’s habitable zone, but it may have a nasty radiation environment. It is huge (8 times the size of Earth) and may not be “terrestrial” like Earth. Hopefully the James Webb Space Telescope will be able to determine whether its atmosphere includes nitrogen and methane.
The problem for anyone hoping to move to any one of the three “firsts” is that they are all 100 and more light-years from Earth. Again, it looks like we’d better stay here on lucky planet Earth.
The more we look out, enjoying our life-supporting sun with its gorgeous emergence every morning (somewhere on Earth; see photo attached), the less we see of others. Most exoplanets found so far are too dry, too hot, too close, too far, too big, too small or too something else. It makes me wonder how alone and how lucky Earth was to keep its life-supporting water so long. The list of happy accidents that supported life here has been beautifully detailed by experts.
Of course, water and life won’t last forever here. The sun will expand, and engulf Mercury as a red giant, then Venus. Maybe Jupiter and its Europa will then be able to house life, or maybe Saturn’s Enceladus. Earth and Mars will be too hot in a billion years.
The universe is very large. There are an astounding number of galaxies beyond ours and many suns within our huge galaxy. So far their planets seem to be very varied, most too hot, too close to their sun, or too dry to host life. Apparently, there is nothing very close to Earth that could house and feed life comfortably.
Mars may have had water for its first millions of years, but our sun eventually dried it out, and its magnetic field went away. Venus was once wetter and more temperate than it is now, until the sun grew hotter, according to NOVA’s “The Planets: Inner Worlds” (KQED, circa January 2020). Magellan’s radar has seen evidence of volcanism there and a runaway greenhouse.
The problem for life elsewhere in our solar system has been too much heat to keep water handy for life. Runaway greenhouse effects are tough on life, which requires water for anything similar to life as we know it.
In James Trefil and Michael Summers’ book “Imagined Life,” the authors point out that our physics is reliable. Careful selection probably drives evolution, but not even a sun is required if there are hydrothermal vents driven by heat from radioactive decay in an exoplanet’s core. Of course, any life might be very different from Earth’s, given such physical challenges.
If we knew just exactly where Earth’s water came from originally, we might have a better idea of what could be happening elsewhere. Did asteroids bring water here? Or did water come after Earth cooled, perhaps from comets? Its heavy-water content suggests it came late. Does Earth’s mantle hold a lot of water? Check out the November 2019 Discover.
Astronomers find possible elusive star behind supernova
Maunakea, Hawaii – Astronomers may have finally uncovered the long-sought progenitor to a specific type of exploding star by sifting through NASA Hubble Space Telescope archival data and conducting follow-up observations using W. M. Keck Observatory in Hawaii.
The supernova, known as a type Ic, is thought to detonate after a massive star has shed or been stripped of its outer layers of hydrogen and helium.
These stars are among the most massive known — at least 30 times more massive than our own Sun. Even after shedding some of their material late in life, they remain very large and bright.
So it was a mystery as to why astronomers had not been able to nab one of these stars in pre-explosion images.
Finally, in 2017, astronomers got lucky. A nearby star ended its life as a type Ic supernova. Two teams of astronomers pored through the archive of Hubble images to uncover the presumed precursor star in pre-explosion photos taken in 2007. The supernova, catalogued as SN 2017ein, appeared near the center of the nearby spiral galaxy NGC 3938, located roughly 65 million light-years away.
This discovery could yield important insights into stellar evolution, including how the masses of stars are distributed when they are born in batches.
"Finding a bona fide progenitor of a supernova Ic is a big prize of progenitor searching," said Schuyler Van Dyk of the California Institute of Technology (Caltech) in Pasadena, lead researcher of one of the teams. "We now have for the first time a clearly detected candidate object."
His team's paper was published in June in The Astrophysical Journal.
A second team led by Charles Kilpatrick of the University of California, Santa Cruz, also observed the supernova in June 2017 in infrared images, which were captured using Keck Observatory's powerful adaptive optics system combined with its OH-Suppressing Infrared Imaging Spectrograph (OSIRIS). Kilpatrick's team then analyzed the same archival Hubble photos as Van Dyk's team to uncover the possible source. An analysis of the object's colors shows that it is blue and extremely hot.
"This supernova occurred in a crowded part of its host galaxy. When we looked at a pre-explosion Hubble Space Telescope image, the stars appeared closely packed together," said Kilpatrick. "This discovery was only made possible because we were able to use Keck Observatory to pinpoint the location of the supernova in its host galaxy. The extremely high-resolution image from Keck allowed us to determine with a high degree of precision exactly where the explosion occurred. This location happened to land right on top of a single, very blue, and luminous object in the pre-explosion Hubble image."
The results from Kilpatrick's team, which appeared in the Oct. 21, 2018, issue of the Monthly Notices of the Royal Astronomical Society, is consistent with the earlier team's conclusions.
"We were fortunate that the supernova was nearby and very bright, about 5 to 10 times brighter than other type Ic supernovas, which may have made the progenitor easier to find," said Kilpatrick. "Astronomers have observed many type Ic supernovas, but they are all too far away for Hubble to resolve. You need one of these massive, bright stars in a nearby galaxy to go off. It looks like most type Ic supernovas are less massive and therefore less bright, and that's the reason we haven't been able to find them."
Because the object is blue and exceptionally hot, both teams suggest two possibilities for the source's identity. The progenitor could be a single hefty star between 45 and 55 times more massive than our Sun.
Another idea is that it could have been a massive binary-star system in which one of the stars weighs between 60 and 80 solar masses and the other roughly 48 suns. In this latter scenario, the stars are orbiting closely and interact with each other. The more massive star is stripped of its hydrogen and helium layers by the close companion, and eventually explodes as a supernova.
The possibility of a massive double-star system is a surprise. "This is not what we would expect from current models, which call for lower-mass interacting binary progenitor systems," Van Dyk said.
Expectations on the identity of the progenitors of type Ic supernovas have been a puzzle. Astronomers have known that the supernovas were deficient in hydrogen and helium, and initially proposed that some hefty stars shed this material in a strong wind (a stream of charged particles) before they exploded.
When they didn't find the progenitors stars, which should have been extremely massive and bright, they suggested a second method to produce the exploding stars that involves a pair of close-orbiting, lower-mass binary stars. In this scenario, the heftier star is stripped of its hydrogen and helium by its companion. But the "stripped" star is still massive enough to eventually explode as a type Ic supernova.
"Disentangling these two scenarios for producing type Ic supernovas impacts our understanding of stellar evolution and star formation, including how the masses of stars are distributed when they are born, and how many stars form in interacting binary systems," explained Ori Fox of the Space Telescope Science Institute (STScI) in Baltimore, Maryland, a member of Van Dyk's team. "And those are questions that not just astronomers studying supernovas want to know, but all astronomers are after."
Type Ic supernovas are just one class of exploding star. They account for 21 percent of massive stars that explode from the collapse of their cores.
The teams caution that they won't be able to confirm the source's identity until the supernova fades in about two years. The astronomers hope to use either Hubble or the upcoming NASA James Webb Space Telescope to see whether the candidate progenitor star has disappeared or has significantly dimmed. They also will be able to separate the supernova's light from that of stars in its environment to calculate a more accurate measurement of the object's brightness and mass.
ABOUT ADAPTIVE OPTICS
W. M. Keck Observatory is a distinguished leader in the field of adaptive optics (AO), a breakthrough technology that removes the distortions caused by the turbulence in the Earth's atmosphere. Keck Observatory pioneered the astronomical use of both natural guide star (NGS) and laser guide star adaptive optics (LGS AO) on large telescopes and current systems now deliver images three to four times sharper than the Hubble Space Telescope. Keck AO has imaged the four massive planets orbiting the star HR8799, measured the mass of the giant black hole at the center of our Milky Way Galaxy, discovered new supernovae in distant galaxies, and identified the specific stars that were their progenitors. Support for this technology was generously provided by the Bob and Renee Parsons Foundation, Change Happens Foundation, Gordon and Betty Moore Foundation, Mt. Cuba Astronomical Foundation, NASA, NSF, and W. M. Keck Foundation.
The OH-Suppressing Infrared Imaging Spectrograph (OSIRIS) is one of W. M. Keck Observatory's "integral field spectrographs." The instrument works behind the adaptive optics system, and uses an array of lenslets to sample a small rectangular patch of the sky at resolutions approaching the diffraction limit of the 10-meter Keck Telescope. OSIRIS records an infrared spectrum at each point within the patch in a single exposure, greatly enhancing its efficiency and precision when observing small objects such as distant galaxies. It is used to characterize the dynamics and composition of early stages of galaxy formation.
ABOUT W. M. KECK OBSERVATORY
The W. M. Keck Observatory telescopes are among the most scientifically productive on Earth. The two, 10-meter optical/infrared telescopes atop Maunakea on the Island of Hawaii feature a suite of advanced instruments including imagers, multi-object spectrographs, high-resolution spectrographs, integral-field spectrometers, and world-leading laser guide star adaptive optics systems. The data presented herein were obtained at Keck Observatory, which is a private 501(c) 3 non-profit organization operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the Native Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
ABOUT HUBBLE SPACE TELESCOPE
The Hubble Space Telescope is a project of international cooperation between NASA and ESA (European Space Agency). NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore, Maryland, conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy in Washington, D.C. | 0.86387 | 3.822946 |
Before our Global Positioning System (GPS) navigation devices can tell us where we are, the satellites that make up the GPS need to know exactly where they are. For that, they rely on a network of sites that serve as “you are here” signs planted throughout the world. The catch is, the sites don’t sit still because they’re on a planet that isn’t at rest, yet modern measurements require more and more accuracy in pinpointing where “here” is.
To meet this need, NASA is helping to lead an international effort to upgrade the four systems that supply this crucial location information. NASA’s initiative is run by Goddard Space Flight Center in Greenbelt, Md., where the next generation of two of these systems is being developed and built. And Goddard, in partnership with NASA’s Jet Propulsion Laboratory in Pasadena, Calif., is bringing all four systems together in a state-of-the-art ground station.
“NASA and its sister agencies around the world are making major investments in new stations or upgrading existing stations to provide a network that will benefit the global community for years to come,” says John LaBrecque, Earth Surface and Interior Program Officer at NASA Headquarters.
GPS won’t be the only beneficiary of the improvements. All observations of Earth from space—whether it’s to measure how far earthquakes shift the land, map the world’s ice sheets, watch the global mean sea level creep up or monitor the devastating reach of droughts and floods—depend on the International Terrestrial Reference Frame, which is determined by data from this network of designated sites.
Earth is a shapeshifter. Land rises and sinks. The continents move. The balance of the atmosphere shifts over time, and so does the balance of the oceans. All of this tweaks Earth’s shape, orientation in space and center of mass, the point deep inside the planet that everything rotates around. The changes show up in Earth’s gravity field and literally slow down or speed up the planet’s rotation.
“In practical terms, we can’t determine a location today and expect it to be good enough tomorrow—and especially not next year,” says Herbert Frey, the head of the Planetary Geodynamics Laboratory at Goddard and a member of the Space Geodesy Project team.
Measuring such properties of Earth is the realm of geodesy, a time-honored science that dates back to the Greek scholar Eratosthenes, who achieved a surprisingly accurate estimate of the distance around the Earth by using basic geometry.
Around 240 BC, Eratosthenes found that when the sun sat directly above the Nile River town of Syene, its rays struck the northern city of Alexandria at an angle of 7.2 degrees (1/50 of a circle). Reasoning that the distance from Alexandria to Syene was therefore 1/50 of the way around the globe, he came up with a circumference of roughly 25,000 miles for Earth, quite close to the modern measurement of 24,902 miles.
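Eratosthenes' arithmetic fits in a few lines. The Alexandria-to-Syene distance of about 500 miles below is an assumed round figure consistent with the article's result, not a number from the original Greek sources (which used stadia):

```python
# Eratosthenes' estimate, as described above: the 7.2-degree angle at
# Alexandria is 1/50 of a full circle, so Earth's circumference is 50x
# the Alexandria-Syene distance (assumed here to be ~500 miles).

angle_deg = 7.2
alexandria_to_syene_miles = 500  # assumed round figure for illustration

fraction_of_circle = angle_deg / 360.0               # 1/50
circumference = alexandria_to_syene_miles / fraction_of_circle

print(f"estimated circumference: {circumference:.0f} miles")  # 25000
```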
“Even with the sophisticated tools we have now, geodesy is still all about geometry,” says Frank Lemoine, a Goddard geophysicist on the project.
As in ancient Greece, geodesy today is a team sport, relying on observations conducted in multiple places. Over the years, four types of space geodesy measurements, carried out by a squad of ground stations and satellites, developed independently. Together, they tell the story of what’s happening on Earth and keep track of the Terrestrial Reference Frame.
“While there is some overlap in what can be gleaned from each geodetic technique, they also provide different forms of information based on how they operate,” says Jet Propulsion Laboratory’s David Stowers, who manages the hardware and data flow for the GPS portion of the space geodesy initiative. “GPS was designed specifically as a positioning system and is fairly ubiquitous, thus providing data strength in numbers. It is unique in providing a physical point of reference for the Terrestrial Reference Frame, with the position of the GPS antenna as the primary type of data.”
Another technique, Very Long Baseline Interferometry (VLBI), acts as a kind of GPS for Earth. To deduce Earth’s orientation in space, and the small variations in the Earth’s rate of rotation, ground stations spread across the globe observe dozens of quasars, which are distant enough to be stable reference points.
“VLBI is the one technique that connects measurements made on Earth to the celestial reference frame—that is, the rest of the universe,” says Stephen Merkowitz, who is the project manager for NASA’s space geodesy initiative.
The key is the painstakingly accurate timing of when the quasar signals arrive. “With this information, we can determine the geometry of the stations that made the observations,” says Chopo Ma, head of the VLBI program at Goddard.
By knowing the geometry, researchers aim to measure the distances between the ground stations down to the millimeter, or about the thickness of a penny.
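To get a feel for why the timing must be so painstaking: a one-millimeter error in a baseline corresponds to only a few picoseconds of difference in signal arrival time. A back-of-the-envelope check (not part of the article's own analysis):

```python
# How precisely must quasar arrival times be measured to pin down
# station separations to 1 mm? Light covers 1 mm in a few picoseconds.
c = 299_792_458.0          # speed of light, m/s
baseline_error_m = 1e-3    # 1 mm target accuracy

timing_precision_s = baseline_error_m / c
print(timing_precision_s)  # ~3.3e-12 s, i.e. a few picoseconds
```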
Keeping tabs on Earth’s center of mass is the job of satellite laser ranging (SLR). It measures the distances to orbiting satellites by shooting short pulses of laser light at satellites and measuring the time it takes for the light to complete the round trip back to the ground station.
“SLR tells us where the center of mass is, because satellites always orbit around the planet’s center of mass,” says Lemoine.
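The SLR arithmetic itself is just round-trip timing: halve the measured flight time and multiply by the speed of light. A minimal sketch using a made-up flight time rather than real station data:

```python
# Satellite laser ranging: fire a pulse, time the round trip,
# halve it to get the one-way range to the satellite.
c = 299_792_458.0        # speed of light, m/s
round_trip_s = 0.040     # hypothetical: 40 ms round-trip flight time

range_m = c * round_trip_s / 2.0
print(range_m / 1000.0)  # ~5996 km one-way range
```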
Another way to measure distances to satellites is with DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite), which was built and is operated by the French space agency, known as CNES.
DORIS takes advantage of the Doppler effect, which is at work when an ambulance’s siren changes pitch as it’s driving toward or away from you. The same effect retunes the frequency of a radio signal emitted by a DORIS beacon as the signal travels from the ground into space and is received by a satellite orbiting the Earth. By measuring the frequency change, researchers can work backward to figure out the distance from the beacon to the satellite that picked it up.
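The Doppler relation this rests on is v ≈ c·Δf/f for speeds far below light speed. A toy calculation with invented numbers (both the 2.0 GHz beacon frequency and the measured shift are illustrative, not actual DORIS values):

```python
# Classical Doppler: a beacon at frequency f, seen from a satellite with
# radial velocity v, is shifted by df = f * v / c.  Invert to get v.
c = 299_792_458.0        # speed of light, m/s
f_beacon = 2.0e9         # Hz, illustrative beacon frequency
df_measured = 46_700.0   # Hz, illustrative measured frequency shift

v_radial = c * df_measured / f_beacon
print(v_radial)          # ~7000 m/s, a typical low-orbit satellite speed
```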
Like GPS, DORIS requires little hardware on the ground, so its beacons are spread all over the globe, even in areas as remote as the Mount Everest base camp.
Not all ground stations were created equal. Some sites are home to one technique, others to two or three, and the sophistication of the techniques can vary from station to station. Right now, only Goddard and the station in Johannesburg, South Africa, are providing results from all four. NASA wants to change that.
“The plan for the upgraded system is to have at least three, and preferably all four, techniques at every station,” says LaBrecque. “This is one of the keys to achieving the goals of a millimeter of accuracy and a tenth of a millimeter of stability for future measurements.”
At the Goddard Geophysical and Astronomical Observatory (GGAO) in Greenbelt, Md., where the state-of-the-art prototype station is being developed, a new VLBI antenna was just installed. Capable of moving faster than its predecessors, the antenna will complete more observations during a run. It’s the first piece of a completely revamped VLBI system that will be more sensitive yet less prone to interference from things like cell phones.
The Next Generation SLR is also being developed at GGAO, under the direction of Jan McGarry, with the goals of more automated operation and the ability to target satellites in higher orbits. Already operational for selected tasks, the system has been ranging to NASA’s Lunar Reconnaissance Orbiter since June 2009.
Another key innovation at Goddard’s new station is the “vector tie” system that will link together all four measurement techniques. “Right now, we have these four independent techniques, and they’re just that: independent,” Lemoine says. “Presently, at a particular ground station, the techniques are only tied together by expensive and infrequently performed ground surveys.”
But with the vector tie system, which will use a laser to continuously monitor the reference points of each technique, researchers will know exactly where a station’s GPS, VLBI, SLR and DORIS sit relative to each other at all times, allowing them to better correct one of the last sources of error in the terrestrial reference frame.
Between 25 and 40 upgraded stations would need to be deployed worldwide to complete the new network. The importance of this investment is detailed in the report “Precise Geodetic Infrastructure: National Requirements for a Shared Resource” by the National Research Council of the National Academies in Washington, D.C.
Agencies around the world, including Germany’s Bundesamt für Kartographie und Geodäsie, France’s Institut Géographique National, the Geographical Survey Institute of Japan, and Geoscience Australia, would build the stations. Together, these groups would choose the best locations, and the work would be done in cooperation with the Global Geodetic Observing System, a scientific organization that helps maintain the terrestrial reference frame.
“By bringing at least three of the four techniques together in each station, we will get a stronger system overall,” says Frey. “NASA is leading the way in this, building a prototype station that will go beyond our current scientific requirements and serve the satellites of the future.” | 0.842245 | 3.321441 |
Scientists are taking a new approach in the search for extraterrestrial intelligence, studying exoplanet atmospheres for signs of pollution.
Cambridge, Massachusetts – Humanity is on the threshold of being able to detect signs of alien life on other worlds. By studying exoplanet atmospheres, we can look for gases like oxygen and methane that only coexist if replenished by life. But those gases come from simple life forms like microbes. What about advanced civilizations? Would they leave any detectable signs?
They might, if they spew industrial pollution into the atmosphere. New research by theorists at the Harvard-Smithsonian Center for Astrophysics (CfA) shows that we could spot the fingerprints of certain pollutants under ideal conditions. This would offer a new approach in the search for extraterrestrial intelligence (SETI).
“We consider industrial pollution as a sign of intelligent life, but perhaps civilizations more advanced than us, with their own SETI programs, will consider pollution as a sign of unintelligent life since it’s not smart to contaminate your own air,” says Harvard student and lead author Henry Lin.
“People often refer to ETs as ‘little green men,’ but the ETs detectable by this method should not be labeled ‘green’ since they are environmentally unfriendly,” adds Harvard co-author Avi Loeb.
The team, which also includes Smithsonian scientist Gonzalo Gonzalez Abad, finds that the upcoming James Webb Space Telescope (JWST) should be able to detect two kinds of chlorofluorocarbons (CFCs) — ozone-destroying chemicals used in solvents and aerosols. They calculated that JWST could tease out the signal of CFCs if atmospheric levels were 10 times those on Earth. A particularly advanced civilization might intentionally pollute the atmosphere to high levels and globally warm a planet that is otherwise too cold for life.
There is one big caveat to this work. JWST can only detect pollutants on an Earth-like planet circling a white dwarf star, which is what remains when a star like our Sun dies. That scenario would maximize the atmospheric signal. Finding pollution on an Earth-like planet orbiting a Sun-like star would require an instrument beyond JWST — a next-next-generation telescope.
The team notes that a white dwarf might be a better place to look for life than previously thought, since recent observations found planets in similar environments. Those planets could have survived the bloating of a dying star during its red giant phase, or have formed from the material shed during the star’s death throes.
While searching for CFCs could ferret out an existing alien civilization, it also could detect the remnants of a civilization that annihilated itself. Some pollutants last for 50,000 years in Earth’s atmosphere while others last only 10 years. Detecting molecules from the long-lived category but none in the short-lived category would show that the sources are gone.
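The long-lived/short-lived logic can be made concrete. Treating the quoted lifetimes as simple exponential e-folding times (an assumption for illustration; the paper's atmospheric chemistry is more involved), here is how the two classes diverge after a hypothetical civilization stops emitting:

```python
import math

# Fraction of a pollutant remaining t years after emission stops,
# modeling each quoted lifetime as an exponential decay constant.
def remaining(t_years, lifetime_years):
    return math.exp(-t_years / lifetime_years)

t = 1_000  # years since the sources went silent
print(remaining(t, 50_000))  # ~0.98: long-lived CFCs still detectable
print(remaining(t, 10))      # ~0.0:  short-lived CFCs long gone
```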
“In that case, we could speculate that the aliens wised up and cleaned up their act. Or in a darker scenario, it would serve as a warning sign of the dangers of not being good stewards of our own planet,” says Loeb.
This work has been accepted for publication in The Astrophysical Journal.
Headquartered in Cambridge, Massachusetts, the Harvard-Smithsonian Center for Astrophysics (CfA) is a joint collaboration between the Smithsonian Astrophysical Observatory and the Harvard College Observatory. CfA scientists, organized into six research divisions, study the origin, evolution and ultimate fate of the universe.
PDF Copy of the Study: Detecting industrial pollution in the atmospheres of earth-like exoplanets
Image: Christine Pulliam (CfA) | 0.810206 | 3.379018 |
Over the summer of 2015, Dr. Nathan De Lee and three undergraduate research students, Johnathan Wilson, Shandon Stamper, and Neil Russel, worked on calibrating the 11-inch telescope in preparation for the opening of the NKU Schneider Observatory. This project, funded by a UR-STEM grant, used an SBIG ST-7e CCD camera attached to an 11-inch Celestron NexStar GPS telescope to take images of both M57 (the Ring Nebula) and M71. The primary goals of this project were to quantify the depth to which we can get good signal-to-noise (S/N) on our stellar targets and to see how well the 11-inch telescope could track an object over the course of the night. The analysis led by Mr. Wilson found that we could get S/N of 100 on our stellar targets down to 15th magnitude (about 4,000 times dimmer than the human eye can see at a dark location) in 80-second exposures. We quickly found, however, that we could not use exposures longer than approximately 80 seconds due to the tracking ability of the telescope.
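The "4,000 times dimmer" figure follows from the astronomical magnitude scale, on which five magnitudes correspond to a factor of 100 in brightness. A quick check, assuming a dark-site naked-eye limit of magnitude 6:

```python
# Each magnitude step is a factor of 100**(1/5) ~ 2.512 in brightness,
# so a magnitude difference dm means a brightness ratio of 10**(0.4*dm).
naked_eye_limit = 6.0   # assumed dark-site limiting magnitude
target = 15.0           # faintest stars reaching S/N = 100 in 80 s

brightness_ratio = 10 ** (0.4 * (target - naked_eye_limit))
print(round(brightness_ratio))  # 3981, i.e. roughly 4,000 times dimmer
```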
Mr. Stamper led the analysis of the telescope tracking. The 11-inch telescope uses a computer-driven drive to track stars as they rise and set in the sky. As can be seen in Figure 1, as the exposure lengths get longer the telescope starts to fall behind the motion of the stars across the sky, causing the stars to become elongated. Beyond 80 seconds, the elongation is significant enough that it becomes hard to separate nearby stars. For projects that require minimal distortions, even shorter exposures will be necessary. Now that the 11-inch telescope is installed at the NKU Schneider Observatory, we will continue work on improving its tracking. There are several routes we can take, including using a guide star, stacking images, and choosing stars away from the meridian. An example of combining several short exposures of the Ring Nebula is shown in Figure 2. Both analyses were presented at the Heather Bullen Summer Research Celebration, and we are in the process of purchasing a new camera for the 11-inch telescope.
When the next Mars Exploration Mission is launched in 2020, UNSW Professor Martin Van Kranendonk will have the satisfaction of knowing he helped select the site where the expedition’s robotic rover will touch down.
Professor Van Kranendonk, director of the Australian Astrobiology Centre in the School of Biological, Earth and Environmental Sciences, was the only Australian scientist to attend a recent NASA workshop in the US where participants voted to choose the top eight possible landing sites for the expedition to the red planet.
“It was one of the best experiences of my career,” says Professor Van Kranendonk, who will give a talk next week at UNSW about the event.
“More than 200 people ranging from NASA scientists and engineers to planetary scientists and graduate students gathered in Los Angeles to debate which of 30 sites offered the best chance of finding evidence of past life on Mars.
“The whole process was a model of democracy. The votes were tallied by raised hands and every person’s vote counted equally. It was absolutely transparent and quite exciting. During the voting you got a feeling right away whether a site was going to be in or out.”
Each site was judged on how it met five mission objectives, including the geological history of the site as assessed from instruments on board orbiting spacecraft or, in one case, from previous rover investigations.
Other criteria were whether a site was likely to have harboured ancient life, and if it contained a good suite of rock samples that could be collected and stored on the surface for possible future transport to Earth, where more detailed analyses could be conducted.
“There are not many geologists involved in planetary science, so I felt I had an important role to play, and was able to make some valuable comments,” says Professor Van Kranendonk, who studies some of the earliest evidence of life on Earth in 3.5 billion year old rocks in the Pilbara region of Western Australia.
“It was also exciting that my top two preferred landing sites for the Mars mission were the top two chosen by the room. It gives me hope that the mission will discover evidence of past life there. ”
The top ranked site is Jezero Crater, a 45-kilometre wide basin that was once home to an ancient Martian lake in which life may have developed.
The second site was at Gustav Crater, now called Columbia Hills, where the previous rover Spirit uncovered some siliceous rocks deposited from a hotspring, including very unusual opaline silica nodules with finger-like protrusions that have a direct analogue on Earth from the Atacama Desert in Chile, formed in the presence of microbes.
The final decision on a site is expected to be made by NASA within the next three years.
Professor Van Kranendonk will give a talk about his experiences on Wednesday 16 September at 12 noon, Biosciences Building D26, Roundtree Room 356. | 0.864376 | 3.208039 |
Pluto’s largest moon, Charon, is almost the same size as the dwarf planet itself, and has a red-colored patch on its north pole that’s around 753 miles in diameter, or about the size of Texas.
Image Credit: NASA/JHUAPL/SwRI
The origin of this red spot has been a mystery, but data from the Pluto fly-by by NASA’s New Horizons spacecraft in 2015 helped scientists gather information about the dwarf planet and its natural satellites.
A study carried out by planetary scientist and New Horizons team member Will Grundy and his colleagues reveals more about where the red spot may have come from, and is published in Nature.
The red spot on Charon is likely caused by Pluto itself. Because the dwarf planet is so small, its weak gravity has a hard time holding onto its atmospheric gases. As these gases escape the dwarf planet’s atmosphere, they waft into space, and Charon’s gravity scoops some of that gas into its own limited atmosphere.
Both methane and nitrogen gases would be directly involved in the transfer. As they were picked up by Charon, they would freeze and accumulate on its surface during the moon’s winter, allowing them to “stick” to the surface.
“The methane molecules bounce around on Charon's surface until they either escape back into space or land on the cold pole, where they freeze solid, forming a thin coating of methane ice that lasts until sunlight comes back in the spring,” Grundy said in a statement.
Once there, radiation from sources like the Sun would slowly work at the frozen methane and nitrogen, converting them into sticky substances known as tholins. These are molecules that can form when ultraviolet light strikes certain simple organic compounds, such as the methane gas coming from Pluto.
Those tholins are reportedly what cause the red spot on Charon. Charon has long winters that last well over 100 Earth years, during which temperatures there can get as low as -459º Fahrenheit, which is long enough and cold enough to allow those gases to freeze.
Just as on Earth, the north and south poles take turns being the winter pole, which has led scientists to believe that the same process might also happen at Charon’s south pole.
Unfortunately, images from the New Horizons spacecraft can’t make out the South pole since it’s hidden in darkness from the lack of light. The North pole was the only visible pole in the photographs, so we don’t really know if the South pole has a giant red spot too.
Source: NASA via Space.com | 0.853519 | 3.795403 |
In 1895, the astronomer Simon Newcomb published his "Tables of the Sun," based on observations of the sun's position from 1750 to 1892. (http://en.wikipedia.org/wiki/Newcomb%27s_Tables_of_the_Sun) These calculations turned out to be reliable enough that astronomers continued to use them until the 1980s.
Before 1956, the second was defined as the mean solar second, or in other words, 1/86,400 of the time the earth takes to spin around on its own axis and see the sun again each day. But because the moon's gravity and the tides are slowing down the earth's spin, this is not a stable quantity.
From 1950-1956, the international authorities agreed to redefine the second to be the "ephemeris second," based on the speed of the earth's orbit around the sun in 1900, as predicted in Newcomb's tables. The earth's orbit around the sun is not slowing down, at least not at anything like the rate of the earth's spin around its own axis. (In practice the ephemeris second is measured by looking at the moon's orbit around the earth and taking pictures of which stars the moon is near.)
Because Newcomb's tables cover observations from 1750 to 1892, the "ephemeris second" corresponds to the mean solar second at the middle of this period, or about 1820. (http://tycho.usno.navy.mil/leapsec.html)
Meanwhile, from 1952 to 1958, astronomers from the U.S. Navy and the British National Physical Laboratory measured the frequency of cesium oscillations in terms of the ephemeris second. (http://www.leapsecond.com/history/1958-PhysRev-v1-n3-Markowitz-Hall-Essen-Parry.pdf) Cesium is even more stable than the orbit of the earth around the sun.
There are a few ways to do the calculations that they show in the paper (having to do with exactly what period they observed over and whether they corrected for some subtleties re: the moon's orbit), giving results between 9,192,631,761 and 9,192,631,780. The average was 9,192,631,770.
In 1967, this became the official definition of the SI second, replacing the ephemeris second. The number is what it is because Newcomb analyzed observations from 1750 to 1892, the middle of that period is about 1820, and that is how fast the earth was spinning on its axis in 1820.
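Pegging the SI second to the earth's 1820 spin rate has a practical consequence: the modern, slightly longer mean solar day drifts away from 86,400 SI seconds, which is why leap seconds are occasionally inserted. A rough sketch, assuming a 2 ms/day excess (an illustrative average; the actual excess varies):

```python
# If today's mean solar day runs ~2 ms longer than 86,400 SI seconds,
# the discrepancy accumulates to most of a second every year.
excess_per_day_s = 0.002      # assumed average excess, seconds per day
days_per_year = 365.25

drift_per_year_s = excess_per_day_s * days_per_year
print(drift_per_year_s)       # ~0.73 s of drift per year
```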
Having completed a tour of our planet, in which we explored the wonders of the interconnected processes that occur here on Earth, it seems natural to venture out further to the galactic scale. Note that from this point forwards we will follow convention and denote the Galaxy we currently reside in with a capital G, as opposed to lowercase galaxy, which refers to any of the galaxies in the universe.
Shall we briefly tour the solar system before we move out a little further? The solar system is the system of objects orbiting the sun, in which we find ourselves. Before we take a tour of the planets, it makes sense to briefly define one. The definition of a planet is a bit of a scientific hot potato, but the current common-sense definition is a celestial body that:
- Is big enough for the force of its own gravity to make it roughly spherical;
- Orbits the sun;
- Has swept out a clear path on its orbit round the sun; and
- Is not a satellite of another body.
As a result of this definition, you may have heard the debate around Pluto, which has recently been deemed a dwarf planet. The solar system is shown below:
I thought it might be fun to go through the planets and list some of their weird and wonderful features:
- Mercury is a terrestrial planet made from rocky materials. Its surface is heavily cratered, much like the surface of the moon. The sheer number of craters shows that the planet has not been reshaped by the geological processes we are accustomed to. There is virtually no atmosphere on Mercury, and the surface temperature is around 170 celsius.
- Venus is shrouded by an atmosphere over 100 times as massive as the Earth's, composed mainly of carbon dioxide. Whilst this atmosphere is the most massive among the rocky terrestrial bodies, it is but a fraction of the planet's total mass. There are clouds of sulphuric acid high in the atmosphere which make investigation difficult. Radar has found evidence of volcanic activity. Due to the atmosphere, the surface temperature is around 460 celsius.
- Mars also has a carbon dioxide atmosphere but this is 60 times less massive than the Earth’s. In fact it is a very dry planet with a weak greenhouse effect and a temperature of around -60 celsius. Mars has polar caps of water ice with a layer of carbon dioxide frost present on the northern pole in winter. Clouds are common with water ice and carbon dioxide crystals. There are impact craters, volcanoes and evidence of flowing water at the surface.
- Jupiter is a massive planet with a swirling appearance, the result of several layers of cloud. The uppermost layer is crystallised ammonia which has been coloured by traces of other substances. Atmospheric winds create the distinctive cloud patterns. There is no great distinction between atmosphere and interior (except right at the centre); the materials simply get hotter and denser until there is a hot ocean of hydrogen and helium with no solid surface. The Great Red Spot of Jupiter is a storm the size of the Earth which has been raging for hundreds of years.
- Saturn has a similar composition to Jupiter and also lacks a solid surface. One of Saturn's most distinctive features is an extensive system of rings made up of small bodies.
- Uranus is broadly similar to Neptune (see below), comprising icy materials from water, ammonia and methane through to the core. Together we consider them the ice giants.
- Neptune is an ice giant composed of icy materials, rocky materials, hydrogen and helium. The planet has a very deep atmosphere of hydrogen and helium with a few other gases present. The surface beneath is a planet-wide ocean of many materials, most notably water, which extends right through to the centre.
Aside from the planets above there are the dwarf planets (Pluto, Eris etc.) as well as comets and many, many asteroids that orbit the sun. There are around 1,000,000 asteroids bigger than 100 m and many more smaller than this. You may have noticed when you read the above that there are very few similarities between our planet and the others in the solar system, and you would not be wrong. There are some important differences which allow us to experience life as we know it:
- The atmosphere has a significant amount of oxygen;
- The atmosphere and the surface holds large reservoirs of liquid and gas water;
- There are unique geological features such as lithospheric plates; and
- There is a biosphere on Earth.
While all of this may seem very exciting (and it is), it is important to step back and realise the insignificance of Earth. Whilst the solar system seems like everything, your day to day experience should tell you it is not. When you look up into the sky at night, the band you see above you is known as the Milky Way. Our solar system is just one of many systems of bodies, each orbiting a star; the Galaxy contains hundreds of billions of stars, with clusters of a few hundred stars being common. All of the matter within the Galaxy orbits around a common centre, where we believe a supermassive black hole resides. Our Galaxy is arranged in a disc shape known as the galactic disc, shrouded by a large halo. Inside this galactic disc there are stars, planets and interstellar matter. Without going into too much detail, have a look at this image:
You can see the bulge in the middle, and the overall shroud of the halo. Within the halo there is little more than globular clusters of stars. In fact the galactic disc on a scale model is a bit like two CDs on top of each other. Now if we were to zoom out and look at our galaxy what we would see is this:
Our galaxy is a spiral galaxy. The arms of the galaxy are illuminated as clouds of gas are sucked into them, being condensed slightly, and forming new stars. In the formation of these stars, the dust and interstellar matter glow brightly giving the arms illumination. This can be seen here:
Important measurements in the 1920s showed that there are many other galaxies beyond our own – i.e. the universe is a collection of galaxies rather than just ours. Quasars are superbright beacons, the result of accretion discs around black holes at the centres of galaxies, where temperatures reach millions of kelvin. These can be used to map the galaxies in the universe. When we do this we see that the universe begins to smooth out on the largest of scales. The following maps have been provided by the wonderful Atlas of the Universe.
This map shows the observable galaxies at a scale of 1bn light years:
And this one shows the largest scale we have, 14bn light years.
We cannot see any more. Why? Because the light has not yet reached us. But do not fear, it is most certainly on its way.
I know that this has been very brief in places; I wanted to keep this around 1,000 words and I did not even manage that. If you want more details on any of these items please leave a comment and I will either answer there or write another post.
The purpose of the page is to bring amateur astronomy back to the cities, back to those areas that are affected by heavy light pollution. Amateur astronomy used to be called “backyard astronomy”. This was in the days when light pollution was not a problem, and you could pursue your hobby from the comfort of your backyard. But as cities like Galway grew, so did light pollution, and the amateur astronomer was forced to drive further and further out into the country to escape that light pollution. It is not uncommon today for an Irish city dweller to drive 40 miles to enjoy his/her hobby. But many people do not have the time or the resources to drive great distances to achieve dark skies. That is the reason for the creation of this page: to allow those who want to enjoy the wonders of the heavens in the comfort of their own neighbourhoods to do so, and to maximize the observing experience despite the presence of heavy light pollution.
The Moon, Sun, Planets and bright stars are all easy telescope targets from the city
The Moon, the Sun, and three of the five naked-eye planets always put on great shows through telescopes. All are so bright that they punch right through light pollution, haze, and smog as if those problems weren’t even there. That’s why many urban astronomers prefer to stick close to home when stargazing. No matter where you are, the Moon never fails to please. With one glance, you’re instantly transported into lunar orbit. Dark maria, towering mountains, and all those craters appear so close, it almost feels like you can reach out and touch them. Viewing occultations of bright planets by the Moon can be fascinating and useful.
Many amateurs complain about the Moon’s overwhelming glare, especially during the gibbous and full phases. Although it will not damage your eyes, the Moon’s brightness can be diminished by using a neutral-density Moon filter or by placing a stop-down mask in front of your telescope.
The Moon, as impressive from the City as a dark site and a great interest when Deep Sky is unavailable, even to rural dwellers
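How much a stop-down mask tames the glare is easy to estimate, since a telescope's light grasp scales with the square of its aperture. A sketch with hypothetical apertures (the 8-inch telescope and 2-inch mask are illustrative values):

```python
import math

# Light gathered scales with the square of the aperture diameter.
full_aperture_in = 8.0   # hypothetical 8-inch telescope
mask_aperture_in = 2.0   # hypothetical 2-inch off-axis mask

light_fraction = (mask_aperture_in / full_aperture_in) ** 2
dimming_mag = 2.5 * math.log10(1.0 / light_fraction)
print(light_fraction)         # 0.0625: about 6% of the light gets through
print(round(dimming_mag, 2))  # 3.01 magnitudes dimmer
```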
Seeing can be excellent in cities due to micro-climate
Jupiter – Lots of features, belts, spots and ovals and moon events
Mars – Polar cap, dark markings and clouds are visible
Saturn – Rings and subtle detail on the globe
Phases of Venus
Comets if bright or with Swan band filter. Good target for CCD photography
Artificial satellites and Iridium flares, predictions at heavens-above website
The brightest star in a binary system is designated as the “A” star and is often referred to as the system’s primary. The fainter companion star is dubbed “B.” Usually, the distinction between the two is obvious, but, in some systems with equal-magnitude components, it can be difficult to tell which is which. Adding to the mix, if there are still more stars involved, they will be assigned letters in alphabetical order, such as “C,” “D,” and so on.
Double stars: Albireo in Cygnus (above), one of my favourite doubles
THE URBAN DWELLER
If you live in a high-rise apartment, inquire about using the roof at night. Viewing from a roof gets you above many earthly obstructions, such as trees and possibly other buildings, as well as above many sources of street-level lighting.
There are some disadvantages to keep in mind, however. Roofs absorb a tremendous amount of heat even on the coldest winter day, only to radiate it back out at night. That rising heat can distort the view badly.
- In the City, contrast is key
- Refractors offer excellent contrast
- Reflectors :- go for a long f ratio and quality optics, even at the expense of aperture
- Low central obstruction is best
- Try and maximise the Signal/Noise ratio
- Keep optics clean
- Go for a well baffled scope or add baffles to your existing scope
- Consider 7X50 Binoculars for quick looks
- The best telescope is the one that gets used most
- Short setup time means the scope gets used more
- Block out stray light. Use existing structures such as sheds and foliage to block the direct view of lights. Pick the darkest section of your site.
The best times to observe:
- During New Moon: the Moon reduces contrast.
- After 10:00PM: this gives dust and water in the air time to settle.
- After 11:00PM: most shops turn off their lights by this time and sky glow is reduced considerably.
- After 1:00AM: less traffic on the streets and light pollution is reduced.
Ask your neighbours over for an observing session.
Web Date: December 18, 2017
Low-energy electrons may spark some space chemistry
When scientists think about what drives chemistry in space, high-energy radiation, such as X-rays, gamma rays, and ions, gets most of the attention. But a new study suggests that low-energy electrons may actually do much of the heavy lifting when it comes to forming bonds and making compounds that could be precursors to complex organic molecules.
Michael A. Huels, Léon Sanche and their groups at the University of Sherbrooke observed these reactions by studying methane and oxygen ice under conditions resembling interstellar space. After bombarding the ice with low-energy electrons, they detected spectroscopic signatures of ethane and ethanol, as well as carbonyl and carboxylate functional groups (J. Chem. Phys. 2017, DOI: 10.1063/1.5003898).
“We have known that significant chemistry occurs from higher-energy sources,” says Stefanie N. Milam of the National Aeronautics & Space Administration, who was not involved in the work. “Demonstrating the formation of complex species from these low-energy electrons provides even further evidence of how readily chemistry is induced and occurs within surfaces both in and out of the solar system.”
When an X-ray or gamma ray strikes an atom or molecule, it can knock off a high-energy electron. That electron then can hit other atoms and molecules as it goes whizzing off, generating secondary, low-energy electrons that could induce the chemistry the scientists observed.
The researchers say this chemistry could happen on grains of ice and dust in nebulae, on comets, or on icy bodies in our solar system such as Jupiter’s moon Europa and Saturn’s moon Enceladus. But Huels is careful to point out that the molecules they observed are still a far cry from the ingredients for life.
“We’re not making a building block of life here,” he says. “We’re making organic molecules that are more complex than the two molecules we put in there. However, the fundamental chemistry would be similar when making small biomolecules in astrophysical or planetary ices.”
Huels’s and Sanche’s groups, which spend most of their time studying the effects of low-energy electrons and ions on DNA during radiotherapy, made ices from methane and oxygen at 22 K and 6 x 10-11 Torr. They fired beams of low-energy electrons—less than 100 eV—at the ices.
Using X-ray photoelectron spectroscopy and thermal desorption mass spectrometry, the team could identify molecules formed in the ice during the electron bombardment. Although the group saw evidence of bonds breaking and new bonds forming, they say they don’t yet know the precise chemical reaction paths happening in the ice.
The next step, according to Huels, is to make more complex ices. His group has now tested ice mixtures containing methane, oxygen, and ammonia, using electrons with energies as low as 8 eV. Huels says they hope to publish those results soon.
- Chemical & Engineering News
- ISSN 0009-2347
- Copyright © American Chemical Society
Chang'e - 1 (Lunar-1 Mission of China)
Chang'e-1 is China's first step in CLEP (China Lunar Exploration Program), a series of unmanned and eventually manned missions to the moon announced in early 2003 by CNSA (China National Space Administration). The program is named for the Chinese moon goddess Chang'e (the ancient legend of Lady Chang'e flying to the Moon reflects the Chinese people's desire to explore the unknown world). In January 2004, the Chinese government gave its approval for a three-phase robotic lunar exploration program. The first spacecraft in the program, Chang'e-1, referred to as the "lunar orbiting mission," provides observations from a low lunar orbit of about 200 km altitude. 1)
Note: Chang'e-1 is also spelled as Chang'E-1 as well as CE-1 (Chang'E-1).
The science objectives of the Chang'e-1 mission are:
1) To obtain three-dimensional imagery of the lunar surface
2) To analyze the distribution of useful elements and materials below the lunar surface
3) To probe the features of lunar soil
4) To explore the space environment between the moon and Earth and above the lunar surface.
Background: China's Chang'e moon exploration program, CLEP, comprises three mission phases: a) orbiting, b) soft landing, and c) a sample return mission including a lunar landing.
• In 1991, Chinese space experts proposed a lunar exploration program and undertook some advanced research.
• In 1998, CNSA started to define the program.
• In 2001, the White Paper on China's Space Activities defined the nation's deep space exploration goals, with an emphasis on lunar exploration.
• On Jan. 23, 2004, the lunar orbiting project was approved, the first step for China's lunar and deep space exploration. 5)
1) Phase 1 of CLEP was designed to be a demonstration of China's technological prowess, involving the launch of lunar orbiters Chang’e-1 in 2007 and Chang’e-2 in 2010.
- Chang'e-1 lunar probe, the moon-orbiting satellite, was launched on Oct. 24, 2007
- Chang'e-2, the second unmanned moon-orbiting mission of phase 1 was launched on Oct. 1, 2010.
2) Phase 2 of CLEP is expected to start in 2013 with the launch of Chang'e-3. In the second phase of the lunar exploration program, two lunar landers will be launched to deploy moon rovers for surface exploration in a limited area.
3) Phase 3 of CLEP is slated for 2017 with the launch of Chang’e-5 on the LM-5E heavy launch vehicle for collecting samples from the lunar surface.
- Also, much later in this phase, the program has its sights set on a manned lunar landing sometime after 2025.
Figure 1: Artist's rendition of the Chang'e-1 spacecraft in lunar orbit (image credit: CAST)
The Chang'e-1 lunar orbiter is based on the DFH-3 communication spacecraft bus series of CAST (3-axis stabilization) with a launch mass of about 2350 kg, about 140 kg of which is the scientific payload (Ref. 3).
Structure: The main body of the DFH-3 bus uses a box-form structure with the size of 2.22 m x 1.72 m x 2.2 m. Use of a central bearing cylinder, honeycomb panel, box, upper module and bottom module.
The DFH-3 satellite consists of the following elements: the propulsion module, service module, communications module, antenna and solar wings. It has 7 subsystems, including the TCS (Thermal Control Subsystem), GNC (Guidance Navigation and Control) also referred to as AOCS (Attitude and Orbit Control Subsystem), EPS (Electric Power Supply) subsystem, TT&C (Telecommand, Telemetry and Tracking) subsystem, propulsion subsystem, structure and communications subsystem, etc.
TCS: Use of active and passive control, thermal paint, multilayer thermal blankets and insulation material, heater, sensors, heat pipe and controller.
GNC: The spacecraft is 3-axis stabilized using the zero-momentum method. Attitude sensing is provided by sun sensors, star trackers, gyroscopes, and UV sensors. Actuation is provided by reaction wheels and thrusters. The pointing accuracy is < 1º (3σ) and the stabilization accuracy is < 0.01º/s.
Figure 2: Illustration of the Chang'e-1 spacecraft in launch configuration (image credit: CAST)
EPS: Symmetric solar wings with single-axis drive, Si solar cells, area of 22.7 m2, maximum power output of 1450 W; use of a NiH2 battery (output of 48 Ah at end of life). The solar wing span is 18 m (5.7 m height) with a power provision of 1.7 kW at EOL. A mission life of at least 1 year is expected in lunar orbit.
Propulsion: Use of bi-propellant thrusters with MMH (monomethylhydrazine) and N2O4, for slow spin, angular rate damping, attitude control and orbit maneuvers; 1 x 490 N and 2 x (6 x 10 N) thrusters.
OBDH (On-Board Data Handling) subsystem: Use of a two-level distributed redundant subsystem, CTU (Central Terminal Unit), 4 RTUs (Remote Terminal Units), one TCU (Telecommand Unit), one set of redundant SDB (Serial Data Bus).
PDMS (Payload Data Management System): The PDMS is a distributed system based on the STD-MIL-1553B data bus. It consists of the BC (Bus Controller), SSR (Solid State Recorder), HRM (High Rate Multiplexer), RT (Remote Terminal) and PPD (Payload Power Distributor). BC manages and controls the STD-MIL-1553B data bus communication.
The stereo camera, the microwave radiometer, the Gamma and X-ray spectrometer access the system via the STD-MIL-1553B data bus. The laser altimeter, the high energy particle detector and solar wind ion detectors connect to the system via RT. The Sagnac-based interference image spectrometer transmits the image data via a high-speed channel to SSR. 6)
PDMS provides and distributes the power for all the payloads. The imaging, science and housekeeping data of the experiments are acquired, compressed, packed, and stored by PDMS.
Figure 3: Block diagram of the PDMS (image credit: CSSAR/CAS)
RF communications: The TT&C (Telemetry, Tracking and Command) S-band system has been formed based on China’s existing TT&C technology. Use of a high-gain directional antenna and a medium-gain omni-directional antenna for the probe. Channel encoding is implemented for the downlink channel, using both high and low data rates for information transmission. An upgrade of ground equipment terminals (12 m dish antenna) was needed.
To provide accurate navigation for the probe during its Earth–Moon flight and initial lunar orbiting flight, China’s VLBI (Very Long Baseline Interferometry) system, designed for astronomical observations, will be used besides the ranging and range rate measurement capabilities of the S-band TT&C network. X-band beacon for VLBI. The goal is to provide 100 m accuracy in position determination during lunar orbit. 7)
The PDMS-acquired science and housekeeping data are stored in the SSR (Solid State Recorder) or in the payload memories. During an overpass, the stored data and the real-time data are multiplexed and encapsulated by the HRM (High Rate Multiplexer) to form a series of transfer frames according to the CCSDS standards; they are transmitted to the Earth station by S-band transmitters. The downlink data rate is 3 Mbit/s.
The SSR has a storage capacity of 48 Gbit. An image data compression board is included in the SSR; the compression ratio is ≥ 2, depending on the complexity of the original image.
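A back-of-the-envelope playback budget follows from the figures above (illustrative only; real passes are limited by ground-station visibility and add framing/coding overhead):

```python
# Rough downlink-budget sketch for the Chang'e-1 PDMS, using only the
# numbers quoted in the text (48 Gbit SSR, 3 Mbit/s downlink, ratio >= 2).

SSR_CAPACITY_GBIT = 48       # solid-state recorder capacity
DOWNLINK_MBPS = 3            # S-band downlink rate
COMPRESSION_RATIO = 2        # minimum image-compression ratio

def hours_to_empty_ssr(stored_gbit=SSR_CAPACITY_GBIT, rate_mbps=DOWNLINK_MBPS):
    """Time to play back a full recorder at the nominal downlink rate."""
    return stored_gbit * 1e3 / rate_mbps / 3600.0

def raw_gbit_held(compressed_gbit=SSR_CAPACITY_GBIT, ratio=COMPRESSION_RATIO):
    """Uncompressed image volume a full recorder represents."""
    return compressed_gbit * ratio

print(f"full SSR playback: {hours_to_empty_ssr():.1f} h")   # ~4.4 h
print(f"raw data held:     {raw_gbit_held():.0f} Gbit")     # 96 Gbit
```

At 3 Mbit/s, draining a full recorder takes roughly 4.4 hours of contact time, which is why several tracking passes per day were scheduled.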
Table 1: Some parameters of the CE-1 orbiter
Figure 4: Photo of the Chang'e-1 spacecraft (image credit: CAST)
Launch: Chang'e-1 was launched on Oct. 24, 2007 on a CZ-3A (Changzheng-3A, or Long March-3A) vehicle from the Xichang Satellite Launch Center in southwest China. 8)
Orbit: The mission is divided into four orbit phases. The transfer from Earth to the moon was planned to take about 8-9 days.
- Initial launch phase: The probe was launched to a highly elliptical orbit with 31º inclination, 200 km perigee and 51000 km apogee
- Apogee raising phase: The apogee of the orbit was raised gradually by conducting maneuvers at perigee three times in a row. After the third maneuver, the probe left Earth orbit and entered an Earth-moon transfer orbit.
- Earth-moon transfer orbit (or cislunar transfer orbit): After a journey of 13 days, Chang'e-1 reached the moon on Nov. 5, 2007. The spacecraft was decelerated by retrograde rocket propulsion and was captured by lunar gravity to become an orbiter around the moon.
- Lunar orbit: Chang'e-1 operated in a lunar polar orbit with inclination of 90º to lunar equatorial plane. It adopted a circular lunar polar orbit to acquire the remote sensing images with the same resolution along the whole orbit. The selected lunar altitude was 200 km.
- The lunar polar working orbit of 200 km altitude (period of 127 minutes) was reached on Nov. 7, 2007.
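As a cross-check, the quoted 127-minute period follows directly from Kepler's third law for a 200 km circular orbit (a minimal sketch; the lunar GM and mean radius used here are standard reference values, not figures from the mission documents):

```python
import math

# Circular-orbit period check for the 200 km lunar working orbit.
GM_MOON = 4902.8          # km^3/s^2 (standard reference value)
R_MOON = 1737.4           # km, mean lunar radius (standard reference value)

def circular_period_min(altitude_km):
    """Keplerian period T = 2*pi*sqrt(a^3 / GM), in minutes."""
    a = R_MOON + altitude_km          # semi-major axis of the circular orbit
    return 2 * math.pi * math.sqrt(a**3 / GM_MOON) / 60.0

print(f"{circular_period_min(200):.1f} min")  # ~127.5 min
```

The result, about 127.5 minutes, agrees with the period quoted above.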
Figure 5: Illustration of the Chang'e-1 orbit transfer to the moon (image credit: CNSA)
The precise orbit determination of Chang’e-1 was performed mainly by using two-way USB (Unified S-band) Doppler and range data collected by Qingdao (120.19°E, 36.04°N) and Kashi (76.03°E, 39.51°N) stations, and was assisted with VLBI delay and delay rate data collected by four VLBI stations, with 25 m at Shanghai and Urumqi, and 50 m at Beijing and Kunming. - Generally, 3 to 4 tracking passes per day were provided, all data were sampled at 1 Hz (Ref. 24).
The VLBI system for Chang'e-1 was developed by SHAO (Shanghai Astronomical Observatory). SHAO was also responsible for VLBI tracking during the mission. The VLBI assembly consisted of the Shanghai VLBI data processing and command center, four radio telescopes (observation stations) located in Shanghai, Beijing, Kunming, and Urumqi, respectively. From Oct. 27 to Nov. 8, 2007, the VLBI system carried out the tracking mission flawlessly. The mission phases included the phase-modulation orbit, the Earth-moon transfer orbit and the lunar capture orbit. During this mission, VLBI provided the probe delay, delay rate and the angle position with very high precision to the BACC (Beijing Aerospace Control Center), and it also took part in the near real-time orbit determination and prediction. The results showed that VLBI made an important contribution to the CE-1 probe entering the predicted lunar orbit smoothly and safely.
Figure 6: The Chang'e-1 spacecraft (image credit: CNSA)
Status of mission:
• On 1 March 2009, at 08:13:10 UTC, Chang'e-1 was guided to crash onto the surface of the moon, ending its 16-month mission (impact area: 52.36ºE, 1.50ºS, in the north of Mare Fecunditatis). - The total mission lasted 495 days, exceeding the designed life-span by about four months. During its orbital mission, the probe transmitted 1.37 TB of data to the ground stations. The received data was processed into 4 TB of science data at different levels. 9) 10) 11) 12)
• Feb. 2, 2009: The spacecraft made its third pass into the lunar eclipse.
• Dec. 20, 2008: On this date, the project commanded the spacecraft back to its 100 km circular orbit (Ref. 3).
• On Dec. 12, 2008, the first global image of the moon was released.
• Although the Chang’e 1 mission officially ended in October 2008, the spacecraft continued flying for another four months to conduct further tests to gain experience for future probe missions.
- On Dec. 19, 2008, the spacecraft was lowered again into an elliptical orbit of 15 km x 100 km.
- On Dec. 6, 2008, the Mission Control commanded the spacecraft to lower its orbit to an altitude of 100 km above the lunar surface.
• Oct. 24, 2008: One year after launch, all scientific and engineering goals were accomplished (Ref. 13).
• Aug. 17, 2008: The spacecraft made its second pass into the lunar eclipse.
• Feb. 21, 2008: The spacecraft made its first pass into the lunar eclipse.
• Between November 2007 and October 2008, Chang’e 1 carried out various exploration missions in the lunar orbit, including obtaining three-dimensional images of lunar surface and making outline graphs of lunar geology and structures; searching for useful elements on the lunar surface and analyzing the elements and materials; examining the features and depth of the lunar soil; and exploring the space environment between 40,000 km and 400,000 km from Earth (Ref. 10).
• The science data from Chang’e-1 were received by the Beijing and Kunming ground stations. The first image from the moon's surface was released on Nov. 26, 2007, captured by the CCD camera (120 m resolution). On Dec. 11, 2007, CNSA published more images from the Chang’e-1 spacecraft, showing the far side of the moon.
Figure 7: First lunar image of the CCD camera (image credit: CNSA)
Legend to Figure 7: The first lunar image of Chang’e-1 is a combination product of 19 tracks image data received in the period Nov. 20–21, 2007. Image size: 280 km x 460 km. It was released on Nov. 26, 2007.
• Nov. 20, 2007: The various payload instruments were powered on. Also on November 20, the GRAS (Ground Research & Application System) of CLEP (Chinese Lunar Exploration Program) received the first lunar image transmitted from CE-1. 13)
• Nov. 7, 2007: The spacecraft reached a circular polar orbit with an altitude of 200 km and a period of 127 minutes.
• Nov. 5, 2007: First LOI (Lunar Orbit Injection) into lunar polar orbit.
• Oct. 31. 2007: The spacecraft entered into LTO (Lunar Transfer Orbit).
Some results/achievements of the mission:
1) MRM (Microwave Radiometer, Moon):
Prior to Chang'e-1, there was no passive, multi-channel, microwave remote sensing of the moon from a satellite. Chang'e-1 had a polar orbit and, thus, was able to observe essentially every location of the moon with a nadir view. Thanks to the long lifespan of Chang'e-1, the MRM obtained brightness temperature data that cover the moon globally eight times, during both lunar daytime and nighttime periods. This global, diurnal coverage provides extremely valuable data for studying the lunar regolith (‘dust’ and impact debris covering almost the entire moon surface). 14) 15) 16)
The Chang'e-1 microwave observations have made several important breakthroughs. MRM passively measured microwave emission in four frequency channels: 3, 7.8, 19.35, and 37 GHz. The higher frequency emission comes from a layer just a little below the surface (a few centimeters), whereas the lower frequency emission can probe depths beyond a few meters. With such penetrative ability, the microwave data can be used to infer thermo-physical properties of the lunar regolith, as well as, to find out about the variation of regolith thickness across the lunar surface. Such information is useful for estimating the distribution and amount of helium 3, a promising nuclear fuel for in situ fusion energy production in the future human settlements on the moon.
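The frequency dependence of the sensing depth can be illustrated with the low-loss dielectric skin-depth formula (a sketch; the regolith permittivity and loss tangent below are assumed illustrative values, not mission results):

```python
import math

# Skin-depth sketch: why the lower CELMS frequencies sense deeper regolith.
# eps_r and tan_delta are assumed illustrative values for dry regolith.
C = 3e8            # m/s, speed of light
EPS_R = 2.7        # assumed relative permittivity
TAN_DELTA = 0.005  # assumed loss tangent

def skin_depth_m(freq_hz, eps_r=EPS_R, tan_delta=TAN_DELTA):
    """Low-loss penetration depth: lambda0 / (2*pi*sqrt(eps_r)*tan_delta)."""
    lam0 = C / freq_hz
    return lam0 / (2 * math.pi * math.sqrt(eps_r) * tan_delta)

for f_ghz in (3, 7.8, 19.35, 37):
    print(f"{f_ghz:6.2f} GHz -> ~{skin_depth_m(f_ghz * 1e9):.2f} m")
```

Because the depth scales with wavelength, the 3 GHz channel probes roughly an order of magnitude deeper than the 37 GHz channel, consistent with the "few centimeters" versus "beyond a few meters" contrast described above.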
Using the MRM data, global brightness temperature maps (TBL) of the moon were constructed at NAO/CAS (National Astronomical Observatories/ Chinese Academy of Sciences) for different frequencies, and separately for day and night times. The results are particularly revealing. On the 37 GHz daytime map (Figure 8), the maria, which appear dark in visible light, become bright in microwave wavelengths to reflect the higher temperatures (due to stronger absorption in the solar visible spectrum).
Figure 8: Daytime brightness temperature map of the moon from China's first lunar probe Chang'e-1 (37 GHz), image credit: European Media Center
Figure 9: Nighttime brightness temperature map of the moon from China's first lunar probe Chang'e-1(37 GHz), image credit: European Media Center
Figure 10: 3.0 GHz TBL image of the moon, nearside (left) and farside (right), image credit: NAO/CAS, (Ref. 16)
2) Coregistration of Stereo Camera images and LAM (Laser Altimeter, Moon) data: 17)
A 1:2.5 million scale global image mosaic has been produced using the CCD images after radiometric and geometric processing, map projection, mosaicking and editing. In the process, LAM DEM (Digital Elevation Model) data was used to correct the positional errors of the geometric processing results.
A coregistration method for stereo imagery and laser altimeter data for 3D high-precision mapping of the lunar surface was developed at IRSA (Institute for Remote Sensing Applications), Beijing, China. In ground processing, a DEM is automatically generated from the CCD stereo images based on a rigorous pushbroom sensor model and multi-level image matching. The DEM is then registered to the LAM data through surface matching with a 3D rigid transformation model. Consequently, the exterior orientation parameters (EOPs) of the images are adjusted using the rigid transformation model so that the images and LAM data are co-registered.
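The core of the 3D rigid-transformation step can be sketched with the standard Kabsch/Procrustes least-squares solution (a simplified stand-in for the actual surface-matching pipeline; here the point correspondences between DEM and LAM footprints are assumed to be already established):

```python
import numpy as np

# Minimal sketch of estimating the 3D rigid transform (rotation R,
# translation t) that aligns one point set to another, as used when
# co-registering stereo-derived terrain points with altimeter footprints.

def rigid_transform(P, Q):
    """Find R (3x3 rotation) and t such that R @ P_i + t ~= Q_i."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: recover a known rotation and translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_transform(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

In the real pipeline this transform is then fed back to adjust the image EOPs; the sketch only shows the closed-form alignment step itself.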
A lunar topography model was obtained based on laser altimeter data. From this model, middle scale volcano and basins have been discovered.
In daytime imagery, strong illumination from high-altitude, high-albedo crater rims seriously interferes with observation of nearby low-altitude, low-albedo areas with shallow slopes, and can even "hide" the latter areas in the glare. Based on the lunar global topography model obtained by the Chang'e-1 mission, and by comparison with the lunar gravity model, a volcano named "YUTU Mountain" has been identified. It is a volcano with a diameter of ~300 km and a height of ~2 km, located at (14ºN, 308ºE) in Oceanus Procellarum. In addition, the DEM of another volcano named "GUISHU Mountain" in the same area has been improved. 18)
Figure 11: Two volcanos in lunar nearside
Geophysical characteristics of the moon: The altimetry data make possible improved estimates of the fundamental parameters of the moon’s shape, which are principally derived from the long-wavelength spherical harmonic coefficients. The mean radius of the moon given by CLTM-s01 (Chang’E-1 Lunar Topographic Model) is 1737013 m, and by rotating a flattened ellipsoid to fit the gridded data, the mean equatorial radius, the mean polar radius and the flattening were determined to be 1737646 m, 1735843 m, and 1/963.7526, respectively. 19)
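A quick consistency check of the quoted ellipsoid figures, using only the numbers above:

```python
# Geometric flattening from the CLTM-s01 equatorial and polar radii.
r_eq, r_pol = 1737646.0, 1735843.0   # m, values quoted in the text
f = (r_eq - r_pol) / r_eq            # flattening definition
print(f"f = 1/{1/f:.4f}")            # ~1/963.7526, matching the quoted value
```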
Figure 12: Topography of the moon from the Chang’E-1 laser altimeter data (image credit: SHAO/CAS, Wuhan University)
Legend to Figure 12: The map is shown in a global Mollweide projection with a central meridian of 270º E, where the near side and far side hemispheres are on the right and left, respectively. The longitudinal and latitudinal grid lines are spaced at an interval of 30º.
Sensor complement: (CELMS, Stereo Camera, IIM, LAM, GRS, GXS, HPD, SWID)
The sensor complement had a mass of ~140 kg and was developed by CSSAR (Center for Space Science and Applied Research) of CAS (Chinese Academy of Sciences). To achieve the science goals of the mission, eight scientific instruments were chosen as the payloads of Chang'e-1. 20) 21)
In order to collect, process, store and transmit the scientific payload data, a PDMS (Payload Data Management System) is included (PDMS is described under the spacecraft).
CELMS (Chang'e-1 Lunar Microwave Sounder):
CELMS, also referred to as MRM (Microwave Radiometer, Moon), is the main scientific instrument of the mission: a four-frequency microwave radiometer at 3 GHz, 7.8 GHz, 19.35 GHz and 37 GHz. The channels exploit the different penetration depths of these frequencies into the lunar regolith; the lowest frequency offers the greatest penetration depth. The project selected 3 GHz as the lowest frequency in order to accommodate the required antenna size on the spacecraft. The highest frequency of 37 GHz is used to obtain the emission (brightness temperature) from the lunar surface. The two mid-frequencies of 7.8 and 19.35 GHz are used to resolve the internal layer structure and its thermal contributions. 22)
Table 2: Overview of the CELMS instrument parameters
On the basis of the lunar brightness temperature (TBL), the project established MicM (Microwave Moon), the world's first microwave map covering the entire moon surface. The MicM survey is not only important for lunar resources and applications, but is also valuable for lunar and cosmic science.
Figure 13: CELMS accommodation on the spacecraft (image credit: NMRS)
CELMS calibration: In-orbit calibration is performed using two targets with known radiation; one is the matched load within the receiver with temperature TH, the other is directed into cold space with radiation of TC = 2.7 K at CELMS frequencies.
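Assuming a linear receiver, the two reference measurements fix the gain and offset, so any measured output maps to a brightness temperature (a sketch; the load temperature and the output values below are made-up illustrative numbers):

```python
# Two-point radiometer calibration sketch. The hot reference is the
# internal matched load at T_H, the cold reference is cold space at
# T_C = 2.7 K; a linear receiver response is assumed.

T_C = 2.7     # K, cold-space reference (from the text)
T_H = 300.0   # K, illustrative assumed temperature of the matched load

def brightness_temp(v, v_cold, v_hot, t_cold=T_C, t_hot=T_H):
    """Map a measured output v to brightness temperature via the two references."""
    gain = (t_hot - t_cold) / (v_hot - v_cold)
    return t_cold + gain * (v - v_cold)

# An output halfway between the references maps halfway in temperature:
print(brightness_temp(0.5, v_cold=0.0, v_hot=1.0))  # 151.35
```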
Figure 14: CELMS calibration block diagram (image credit: NMRS)
Figure 15: Near and far side TBL map in ortho-projected configuration for 37 GHz night (image credit: NMRS)
Figure 16: Near and far side TBL map in ortho-projected configuration for 3 GHz night (image credit: NMRS)
The lunar regolith records not only the history of lunar formation, but also provides abundant information concerning the origin of the Solar System, including Earth. In addition, the regolith provides important information concerning the geology of the moon.
The regolith layer thickness values are obtained from estimates with direct and indirect methods. The thickness values estimated by different methods and by different scientists differ considerably, since there were no global data references for such estimations. Figure 17 presents a global map of regolith layer thickness values estimated by CELMS. The CELMS results tend to be thinner than other published results.
Figure 17: Global map of regolith layer thickness values estimated by CELMS (image credit: NMRS)
Stereo Camera:

The objective of the CCD imager is to provide stereo imagery at a resolution of about 120 m. The imager is a three-line instrument, observing the target area from three different view angles (forward, nadir and backward), which makes it possible to generate DEM data and orthophoto image data of the global lunar surface. 23)
Table 3: Specification of the stereo camera (Ref. 21)
The stereo camera consists of an optics subsystem, a framework supporting the optics lens, the planar CCD array and the corresponding signal-processing subsystem. The three parallel rows of the planar CCD array obtain the nadir (0º), forward (+17º), and backward (-17º) views of the moon's surface simultaneously as the spacecraft moves forward.
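The ±17º look angles set the stereo base-to-height ratio, which determines how image parallax converts to terrain height (a simplified flat-geometry sketch; the sensitivity estimate is illustrative, not a quoted mission figure):

```python
import math

# Stereo geometry sketch for the three-line camera: the forward/backward
# views at +/-17 deg give an effective base-to-height ratio B/H.
LOOK_ANGLE_DEG = 17.0
PIXEL_M = 120.0   # ground pixel size from the text

b_over_h = 2 * math.tan(math.radians(LOOK_ANGLE_DEG))
print(f"base-to-height ratio: {b_over_h:.3f}")   # ~0.611

# Simplified sensitivity: height change per ground-projected parallax,
# dh ~= dp / (B/H). One pixel of parallax then corresponds to roughly:
dh = PIXEL_M / b_over_h
print(f"height per pixel of parallax: ~{dh:.0f} m")
```

This is why convergent three-line geometry, rather than nadir imaging alone, is needed to derive a useful global DEM.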
Figure 18: Schematic diagram of stereo camera (image credit: CSSAR, Ref. 21)
Figure 19: Observation configuration of the Stereo Camera with corresponding lunar ground track (image credit: IRSA, Ref. 17)
From Nov. 20, 2007 to July 1, 2008, the CCD camera successfully mapped the whole surface of the moon, including the polar areas, where the solar illumination was quite weak. Generally, orbital image data are distinctly affected by altitude, solar elevation angle, incidence angle, view angle, exposure time, etc.
The stereo camera and the Sagnac-based interferometer spectrometer imager (IIM) are integrated together (Figure 20).
IIM (Sagnac-based Imaging Interferometer Spectrometer):
The objective of IIM is to obtain multispectral imagery of the lunar surface. The VNIR (Visible Near Infrared) reflectance properties of the moon are sensitive to the mineralogy, mineral chemistry and physical state of the lunar regolith. From the imagery, the distribution of major types of minerals and rocks can be identified (Ref. 20). The instrument features 32 channels in the spectral range of 480 - 960 nm.
Table 4: Relative errors of wavelength of the IIM channels
The IIM instrument is a Sagnac-based pushbroom Fourier transform imaging spectrometer, which operates from visible to near infrared (0.48-0.96 µm). IIM yields a ground resolution of 200 m/pixel and 25.6 km swath width.
On Nov. 26, 2007, the IIM instrument was powered on. The real time data was transmitted to GSDSA (Ground Segment for Data, Science, and Applications) of China’s Lunar Exploration Program.
Figure 21: Schematic diagram of the Sagnac spectrometer imager (image credit: CSSAR)
Figure 22 shows the calculated 3-band picture, the synthesized pseudo-color picture using the 3 bands, the interference pattern and the recovered spectrogram.
Figure 22: Preliminary results of the IIM instrument (image credit: CNSA)
Legend to Figure 22: (a) Interferogram, (b) interferogram curve, (c) spectral curve.
- 1) Spectral image of band No. 4 (its center of wavelength is 504.96 nm)
- 2) Spectral image of band No. 17 (its center of wavelength is 644.64 nm)
- 3) Spectral image of band No. 30 (its center of wavelength is 891.11 nm)
- 4) False color image from band No. 4, No. 17, No. 30
- 5) Relative location of imaging area in the same orbit between image interferometer and CCD Stereo Camera.
LAM (Laser Altimeter, Moon):
LAM is designed to measure the distance between the spacecraft and the nadir point on the lunar surface. The instrument consists of a laser transmitter and a laser receiver. The transmitter uses a laser-diode-pumped, Q-switched Nd:YAG laser. The output beam divergence is reduced to 0.6 mrad by a Galilean refractor-type collimator. The return pulses are captured by a Cassegrain-type reflector telescope with an aperture of 140 mm; the signal is detected by a Si-APD detector. The travel time of a pulse gives the distance between the satellite and the lunar surface.
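The ranging principle is plain time-of-flight (a minimal sketch):

```python
# Laser altimetry: distance = c * round_trip_time / 2.
C = 299792458.0  # m/s, speed of light

def range_m(round_trip_s):
    """One-way distance from a measured round-trip pulse time."""
    return C * round_trip_s / 2.0

# From the 200 km working orbit the round trip takes about 1.33 ms:
dt = 2 * 200e3 / C
print(f"{dt*1e3:.3f} ms -> {range_m(dt)/1e3:.0f} km")
```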
The LAM circuit box was a control unit used to perform the distance measurement and to supply the laser power. The size of the unit was 260 mm x 200 mm x 190 mm with a mass of 5.8 kg.
Table 5: Overview of LAM parameters 24)
Figure 23: Photo of the LAM instrument (image credit: CSSAR)
The LAM instrument was installed parallel to the Z axis of the orbiter. The boresight of LAM was parallel to the CCD detecting system with a measuring accuracy of ±1'. Both the laser transmitter and receiver telescope were installed facing the lunar surface. The along-track shot spacing was about 1.4 km, assuming a 100% laser ranging probability, and the minimum foot spacing along the equator should be about 7.5 km after two months of measurements.
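The ~1.4 km along-track spacing is consistent with the ground-projected orbital speed at a 1 Hz pulse rate (a sketch; the 1 Hz repetition rate, lunar GM and radius are assumed standard values rather than figures quoted in this text):

```python
import math

# Ground-track shot spacing check for LAM.
GM_MOON = 4902.8   # km^3/s^2 (standard reference value)
R_MOON = 1737.4    # km (standard reference value)
ALT = 200.0        # km, working-orbit altitude from the text
PULSE_HZ = 1.0     # assumed laser repetition rate

a = R_MOON + ALT
v_orbit = math.sqrt(GM_MOON / a)          # km/s, circular-orbit speed
v_ground = v_orbit * R_MOON / a           # km/s, projected onto the surface
spacing = v_ground / PULSE_HZ             # km between successive shots
print(f"~{spacing:.2f} km along-track")   # ~1.43 km, close to the quoted 1.4 km
```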
On Nov. 28, 2007, the LAM instrument was powered on. After several days adjustment and test in orbit, it was changed into normal operating status. The laser altimeter can obtain the elevation data of the whole lunar surface. The data will be used to produce the DEM map of the whole lunar surface.
Figure 24: One orbit raw data taken by LAM start from 11:25:00 LT Dec. 14, 2007 (image credit: CSSAR)
GRS (Gamma-Ray Spectrometer):
The objective of GRS is to observe the abundance of chemical elements, such as C, O, Mg, Al, Si, K, Ca, Fe, Th and U, on the lunar surface. The main detector of the instrument is a 12 cm diameter x 7.6 cm long CsI(Tl) crystal. It is surrounded on the sides and back by a single CsI crystal shield, approximately 3 cm thick. This CsI shield attenuates gamma rays coming from the spacecraft, acts as a charged-particle shield, and reduces the Compton background. Two gamma-ray spectra are collected simultaneously: the raw CsI shield spectrum and the CsI(Tl) spectrum in anticoincidence with the CsI shield.
Table 6: Main parameters of the GRS instrument (Ref. 21)
GRS has performed normal observations since Nov. 28, 2007. Sample spectra from the GRS that illustrate the excellent quality of the data are shown in Figure 25. Typical gamma-ray lines from the lunar surface are clearly identified in the spectra. The data analysis shows that the main performance parameters of GRS met all design requirements.
Figure 25: Pulse height spectrum of GRS measured during the first run on Nov. 28, 2007 (image credit: CSSAR)
Figure 26: Photo of the GRS instrument (image credit: CSSAR)
XRS (X-Ray Spectrometer):
The main goal of XRS is to detect fluorescent X-rays from the lunar surface and to map the abundance distribution of three major rock-forming elements on the moon: Mg, Al, and Si. When solar X-rays or cosmic rays bombard the lunar surface, some elements emit fluorescent X-rays. These elements can be identified from their characteristic lines, and their abundances can be determined from the intensity of the emitted X-rays.
The elemental composition is crucial in studying not only the geochemical nature of terrains on the moon's surface, but also the history of the impact activity and the volcano-tectonic past on the moon.
The XRS instrument consists of the lunar X-ray detector and the solar X-ray monitor. The XRS is based on Si-PIN diode technology, a kind of semiconductor detector, having a better energy resolution and less mass than those of proportional counters, which were used in the Apollo-15 and -16 missions and the NEAR mission to the asteroid Eros.
Two types of Si-PIN sensors are used in XRS. One is for the lunar SXD (Soft X-ray Detector) for observations in the range of 1-10 keV, and the other is for the HXD (Hard X-ray Detector) for observations in the range 10-60 keV.
Table 7: Main parameters of the XRS instrument
The commissioning of the XRS instrumentation started at the end of November 2007. It turned out that this activity coincided with the solar minimum. For comparison, data from NOAA's GOES-XRS instrument, which measured the solar X-ray flux in a soft (1-8 Å) and a hard (0.5-3 Å) energy band, also showed that during the quiescent period the solar X-ray emission had its lowest flux, around the A0.3 level.
As a result, no significant elemental characteristic line was found, even in the co-added spectrum from all SXD units over several hours of integration. This situation lasted no longer than 10 days: on December 5, 2007, a sunspot appeared on the east limb of the sun. The solar X-ray flux then increased gradually, reaching the B1 level in a few days. From Dec. 20, 2007 onwards, the solar X-ray flux started to fall as the sunspot rotated around to the far side of the sun.
During the 15 days of the solar flare period, the spectra obtained by XRS indicated that low energy lines (Mg: 1.25 keV; Al: 1.49 keV and Si: 1.74 keV) were observed and the Calcium (Kα ) line (3.69 keV) was also unambiguous.
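Peak identification of this kind can be sketched as a nearest-line lookup against the quoted Kα energies (a simplified illustration; the matching tolerance is an assumed value, and real analysis fits overlapping peaks rather than matching single energies):

```python
# Element identification sketch using the K-alpha line energies (keV)
# quoted in the text for the XRS observations.
LINES_KEV = {"Mg": 1.25, "Al": 1.49, "Si": 1.74, "Ca": 3.69, "Fe": 6.40}

def identify(peak_kev, tol=0.05):
    """Return the element whose K-alpha line is nearest, within tol keV."""
    elem, e = min(LINES_KEV.items(), key=lambda kv: abs(kv[1] - peak_kev))
    return elem if abs(e - peak_kev) <= tol else None

print(identify(3.70))   # Ca
print(identify(5.0))    # None: no characteristic line within tolerance
```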
Figure 27: Photo of the XRS instrument (left) and the solar X-ray monitor (right), image credit: CSSAR
At the end of the year 2007, a large, long-duration C-class solar flare began at 00:30 UTC on Dec. 31, 2007, as shown in Figure 28. The flare lasted for more than 2 hours, nearly 50 minutes of which coincided with the XRS observations. When the flare reached its peak C8.7 level, the Chang’e-1 satellite was just flying over the south pole and started to travel northward on the far side of the moon.
Figure 28: Counts rate (1.5-10 keV) and instantaneous spectra (1.5-3 keV) from the solar X-ray monitor (b) and (c). Data from GOES are shown (in units of W/m2) for comparison (a), image credit: CSSAR
The ground track of XRS ran along 93ºW longitude during the big X-ray flare, and the footprints of the 4 SXD units covered only highland lithologies, except for Mare Orientale.
The spectra shown in Figure 29 indicate that a merged peak of low-energy lines (Mg: 1.25 keV; Al: 1.49 keV; Si: 1.74 keV) was detected, and the Ca Kα line (3.69 keV) as well as the Fe Kα (6.40 keV) and Kβ (7.06 keV) lines are also prominent. The quite strong Ca peak implies that the observed area of the lunar surface is enriched in calcium, consistent with highland components (e.g., anorthosite).
Figure 29: Raw spectra from one SXD unit, showing a merged peak of magnesium, aluminum and silicon and unambiguous characteristic lines of calcium and iron (image credit: CSSAR)
HPD (High-energy Particle Detector):
The objective of HPD, also referred to as HSPD (High-energy Solar Particle Detector), is to observe heavy ions and protons in the space around the moon. Protons with energies in the range of 4-400 MeV can be detected; heavy ions such as He, Li and C are also analyzed. Three slices of semiconductor detector make up the telescope sensor system. When cosmic particles pass through the semiconductor detectors, the energy they deposit produces electrical pulses, which are amplified and counted. Analyzing the pulse heights identifies the different particle species.
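The pulse-height identification described above is the classic ΔE-E telescope technique: in a stacked detector, the energy deposited in a thin front layer multiplied by the total energy is roughly proportional to mz² of the particle (via the Bethe formula), so different species fall into separate bands. The toy classifier below illustrates the principle only; the thresholds are invented for illustration and bear no relation to the actual HPD calibration:

```python
# Toy Delta-E/E particle identification, the principle behind a
# multi-layer semiconductor telescope like the HPD. The product
# dE * E serves as a discriminant ~ m * z^2, separating species.
# Thresholds are invented, purely illustrative values.

def identify_particle(dE_MeV: float, E_MeV: float) -> str:
    pid = dE_MeV * E_MeV        # discriminant ~ m * z^2
    if pid < 5.0:               # light, singly charged
        return "proton"
    elif pid < 60.0:            # z = 2
        return "helium"
    else:                       # heavier ions (Li, C, ...)
        return "heavy ion"
```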
Table 8: Main particles detected with the HPD
Figure 30: Photo of the HPD instrument (image credit: CSSAR)
On Oct. 26, 2007, the HPD was powered on. Figure 31 shows the response of the housekeeping parameters while the satellite was crossing the radiation belts from 2007-10-27 to 2007-10-29. Once the satellite entered lunar orbit, the science data showed only background noise; no high-energy particle flux was measured, as expected during this solar-minimum year.
Figure 31: The HPD housekeeping parameters show that the satellite is crossing the radiation belt (image credit: CSSAR)
SWID (Solar Wind Ion Detector):
Two SWID instruments are designed to analyze low-energy ions in the same region of space as the high-energy particle detector (HPD). The two detectors are mounted perpendicular to each other. Each solar wind ion detector consists of a collimator, an ion analyzer, and an MCP (Microchannel Plate) amplifier.
Table 9: Main parameters of SWID
Figure 32: Photo of the SWID instrument (image credit: CSSAR)
On Nov. 26, 2007, SWID was powered on. The observations show that the data detected by SWID varied periodically, with a 127-minute cycle identical to the satellite's orbital period. Because the two detectors are installed in different directions, SWID A detected the solar wind only at certain polar angles, with counts rising and falling accordingly, while SWID B detected the solar wind at all polar angles with a similar variation. All results are consistent and as expected; they show that the instrument is operating nominally and that the detected data are usable.
On Dec. 8-9, 2007, when the moon was in line between the sun and the Earth, the instruments detected the solar wind plasma.
Figure 33 shows the periodic variation of counts, accumulated over all 48 energy steps, at the 9th polar angle while the detector was in the solar wind.
ESA tracking support for the Chang'e-1 mission:
The People's Republic of China and ESA have a long history of scientific collaboration. The first co-operation agreement was signed in 1980, to facilitate the exchange of scientific information. Thirteen years later, the collaboration focused on a specific mission, ESA’s Cluster, to study the Earth's magnetosphere. Then, in 1997, the CNSA invited ESA to participate in Double Star, a two-satellite mission to study the Earth’s magnetic field, but from a perspective which is different from that of Cluster and complementary to it.
During ESA's SMART-1 mission, which ended in September 2006, ESA/ESOC provided China with details of the spacecraft's position and transmission frequencies so that the Chinese could test their tracking stations and ground operation procedures by following it - a part of their preparation for Chang'e-1. 25)
During the development phase of the Chang'e-1 spacecraft, ESA's ground station network ESTRACK was mobilized to provide direct support to China's Chang'e-1 moon mission. The mission was supported from the ESA ground stations in Maspalomas and Kourou. During the track on Nov. 1, 2007, ESA tracking stations transmitted telecommands to a Chinese satellite for the first time. 26) 27)
This was the culmination of a long preparation by BACC (Beijing Aerospace Control Center) and ESOC (European Space Operations Center) that started nearly two years before launch, when a Chinese delegation visited ESOC in 2005 to explore the possibility of ESOC providing tracking support to Chang’e-1. Following detailed discussions on the support, ESOC and BACC agreed in February 2006 on a contract to provide it.
Following the agreement, ESOC and BACC faced the problem of connecting two systems, the BACC mission control system and the ESOC ground station network ESTRACK, within the relatively short period of one year. The ESOC proposal to BACC was based on ESOC’s model for providing cross support to other agencies such as NASA and JAXA: to support Chang’e-1 using CCSDS standards, thereby achieving systems interoperability without modifying either the BACC system or the ESOC system. This model hides the implementation on both sides and defines only the interfaces that each side needs to support. To measure success, the project used the concepts of verification and validation.
• Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
• Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies the specified requirements.
The key to success was the use of an SLE (Space Link Extension) gateway on both sides; ESOC and BACC therefore defined a roadmap for implementing the SLE user function at BACC. The BACC implementation was based on the ESOC SLE API (Application Programming Interface) communication server (using TCP/IP protocols for communications). One complication was that the Chang’e-1 telemetry frame was not compatible with the CCSDS recommendations.
The final suite of CCSDS standards applied to Chang’e-1 (as well as to the follow-up mission Chang’e-2) were:
• Telemetry: SLE Return All Frames (RAF)
• Telecommanding: SLE Command Link Transmission Units (CLTU)
• Orbit Data: Orbit Ephemeris Message (OEM)
• Tracking Data: Tracking Data Message (TDM).
Figure 34: Overall architecture for the Chang'e mission cross-support service (image credit: ESOC, BACC)
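For context on the standards listed above: the SLE Return All Frames service delivers CCSDS TM transfer frames, whose 6-byte primary header has a fixed layout defined in CCSDS 132.0-B. The sketch below is a minimal, illustrative parser of that header, not BACC's or ESOC's actual software:

```python
# Parse the 6-byte primary header of a CCSDS TM transfer frame
# (CCSDS 132.0-B): version (2 bits), spacecraft ID (10), virtual
# channel ID (3), OCF flag (1), master channel frame count (8),
# virtual channel frame count (8), frame data field status (16).
import struct

def parse_tm_primary_header(frame: bytes) -> dict:
    first16, mc_count, vc_count, status = struct.unpack(">HBBH", frame[:6])
    return {
        "version": (first16 >> 14) & 0x3,
        "spacecraft_id": (first16 >> 4) & 0x3FF,
        "virtual_channel_id": (first16 >> 1) & 0x7,
        "ocf_flag": first16 & 0x1,
        "mc_frame_count": mc_count,
        "vc_frame_count": vc_count,
        "data_field_status": status,
    }
```

The appeal of the cross-support model is visible here: any agency that can produce or consume frames in this agreed layout can interoperate without exposing the internals of its own ground systems.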
Operations: ESA ground tracking support to China's Chang'e-1 successfully started on November 1, 2007 with the first receipt of telemetry signals from the Chinese mission at ESA's 35 m deep-space station at New Norcia, Australia. Two hours and 39 minutes later, the first telecommands to Chang'e-1 were transmitted via ESA's 15 m station in Maspalomas, Spain, when the satellite was nearly 200,000 km from the Maspalomas station. An hour later, the ESA station in Kourou, French Guiana, also successfully received telemetry and transmitted commands to Chang'e-1.
New Norcia, Maspalomas and Kourou stations are part of ESA's ESTRACK ground station network, and are remotely controlled from ESOC (European Space Operations Center) in Darmstadt, Germany. The successful communications marked a major milestone, as this was the first time a telecommand to a Chinese spacecraft had been transmitted from an ESA station. In addition to the receipt of telemetry and transmission of telecommands, the Maspalomas and Kourou stations also performed ranging and Doppler measurements used to determine the spacecraft's location and direction (Ref. 26).
1) Li Wang, X. Hou, H. Wang, Q. Zhang, Z. Song, W. Shi, J. Chang, “Present and Prospects for the Chinese Exploration Technology,” Proceedings of the 57th IAC/IAF/IAA (International Astronautical Congress), Valencia, Spain, Oct. 2-6, 2006, IAC-06-A3.6.04
2) “Chang’e Program—China’s Lunar Exploration Activities,” URL: http://sci2.esa.int/Conferences/ILC2005/Manuscripts/HaoXifan-01-DOC.pdf
4) Maohai Huang, “China’s Chang’E Program,” Astronomy from the Moon WG Meeting, 26th IAU GA (International Astronomical Union General Assembly), Prague, Czech Republic, Aug. 14-16, 2006, URL: http://www.astron.nl/moon/pdf/Maohai%20Huang%20-%20Change%20-%20IAU%20GA.pdf
5) “China’s First Lunar Probe - Chang’e 1,” CAST, 2012, URL: http://www.cast.cn/CastEn/Show.asp?ArticleID=17879
6) Huixian Sun, Xiaomin Chen, “The Payload Data Management System for Chang'e-1,” Proceedings of the 59th IAC (International Astronautical Congress), Glasgow, Scotland, UK, Sept. 29 to Oct. 3, 2008, IAC-08.A3.2.A6
7) Yu Zhi-jian, Lu Li-chang, Liu Yung-chun, Dong Guang-liang, “Space operation system for Chang’E program and its capability evaluation,” Journal of Earth System Science, Vol. 114, No 6, Dec. 2005, pp. 795-799, URL: http://www.ias.ac.in/jess/dec2005/ilc-26.pdf
9) Ouyang Ziyuan, Li Chunlai, Zou Yongliao, Zhang Hongbo, Lu Chang, Liu Jianzhong, Liu Jianjun, Zuo Wei, Su Yan, Wen Weibin, Bian Wei, Zhao Baochang, Wang Jianyu, Yang Jianfeng, Chang Jin, Wang Huanyu, Zhang Xiaohui, Wang Shijin, Wang Min, Ren Xin, Mu Lingli, Kong Deqing, Wang Xiaoqian, Wang Fang, Geng Liang, Zhang Zhoubin, Zheng Lei, Zhu Xinying, Zheng Yongchun, Li Junduo, Zou Xiaoduan, Xu Chun, Shi Shuobiao, Gao Yifei, Gao Guannan, “Chang’E-1 Lunar Mission: An Overview and Primary Science Results,” Chinese Journal of Space Science, Vol. 30, 2010, pp. 392-403, URL: http://www.cjss.ac.cn/qikan/manage/wenzhang/2010-05-02.pdf
10) Dragon in Space Website, URL: http://www.dragoninspace.com/lunar-exploration/change1.aspx
11) “Missions to the Moon,” Planetary Society, URL: http://www.planetary.org/explore/space-topics/space-missions/missions-to-the-moon.html
12) Huang Yanhong, “Chang'E-1 has blazed a new trail in China's deep space exploration,” Space Ref, Dec. 2, 2009, URL: http://www.spaceref.com/news/viewpr.html?pid=29723
13) W. Zuo, Z.-B. Zhang, X.-Q. Wang, L. Geng , X. Xiao, J.-Z. Liu, J.-J. Liu, X. Tan, X. Ren, L.-Y. Zhang, X.-D. Wang, J.–Q. Feng, L.-L. Mu, G.-L. Zhang, C.-L. Li, “Public Data Release of the Chinese Chang’E Missions,” EPSC Abstracts, Vol. 6, EPSC-DPS2011-995-1, 2011, URL: http://meetingorganizer.copernicus.org/EPSC-DPS2011/EPSC-DPS2011-995-1.pdf
14) Yong-Chun Zheng, Kwing L. Chan, “The First Microwave Image of the Complete Moon,” European Planetary Science Congress 2010, Sept. 20-24, 2010, Rome, Italy, URL: http://lunarscience.nasa.gov/articles/the-first-microwave-image-of-the-complete-moon/
15) Yong-Chun Zheng, Y. L. Zou, K. L. Chan, K. T. Tsang, B. Kong, Z. Y. Ouyang, “Global brightness temperature of the Moon: result from Chang’E-1 microwave radiometer,” EPSC Abstracts, Vol. 5, EPSC2010-224, 2010, European Planetary Science Congress 2010, URL: http://meetings.copernicus.org/epsc2010/abstracts/EPSC2010-224.pdf
16) Yong-Chun Zheng, K. L. Chan, K. T. Tsang, F. Zhang, Y. L. Zou, Z. Y. Ouyang, “The first microwave image of the complete moon from Chang'e-1 Lunar Orbiter,” 42nd Lunar and Planetary Science Conference (2011), The Woodlands, TX, USA, March 7-11, 2011, 1352.pdf, URL: http://www.lpi.usra.edu/meetings/lpsc2011/pdf/1352.pdf
17) K. Di, Z. Yue, M. Peng, Z. Lin, “Coregistration of Chang'E-1 Stereo Images and Laser Altimeter Data for 3D Mapping of Lunar Surface,” Special joint symposium of ISPRS Technical Commission IV & AutoCarto in conjunction with ASPRS/CaGIS 2010, Fall Specialty Conference, November 15-19, 2010, Orlando, Florida, USA, URL: http://www.isprs.org/proceedings/XXXVIII/part4/files/Di.pdf
18) J. S. Ping, X. L. Su, Q. Huang, “Recent Selenodetic Progress in Chang'e Lunar Mission,” 41st Lunar and Planetary Science Conference (2010), The Woodlands, TX, USA, March 1-5, 2010, 1059.pdf, URL: ftp://ftp.lpi.usra.edu/pub/outgoing/lpsc2010/full616.pdf
19) Q. Huang, J. S. Ping , M. A. Wieczorek, J. G. Yan, X. L. Su, “Improved global lunar topographic model by Chang'e-1 laser altimeter data,” 41st Lunar and Planetary Science Conference (2010), The Woodlands, TX, USA, March 1-5, 2010, 1265.pdf, URL: http://www.lpi.usra.edu/meetings/lpsc2010/pdf/1265.pdf
20) Ouyang Ziyuan, Jiang Jingshan, Li Chunlai, Sun Huixian, Zou Yongliao, Liu Jianzhong, Liu Jianjun, Zhao Baochang, Ren Xin, Yang Jianfeng, Zhang Wenxi, Wang Jianyu, Mou Lingli, Chang Jin, Zhang Liyan, Wang Huanyu, Li Yongquan, Zhang Xiaohui, Zheng Yongchun, Wang Shijin, Bian Wei, “Preliminary Scientific Results of Chang’E-1 Lunar Orbiter: Based on Payloads Detection Data in the First Phase,” Chinese Journal of Space Science, Vol. 28, No 5, 2008, pp. 361-369, URL: http://www.cjss.ac.cn/qikan/manage/wenzhang/2008-05-01.pdf
21) Sun Huixian, Wu Ji, Dai Shuwu, Zhao Baochang, Shu Rong, Chang Jin, Wang Huanyu, Zhang Xiaohui, Ren Qiongying, Chen Xiaomin, Ouyang Ziyuan, Zou Yongliao, “Introduction to the payloads and the initial observation results of Chang’E-1,” Chinese Journal of Space Science, Vol. 28, No 5, 2008, pp. 374-384, URL: http://www.cjss.ac.cn/qikan/manage/wenzhang/2008-05-03.pdf
22) Jingshan Jiang, Zhenzhan Wang, Xiaohui Zhang, Yun Li, Xuefei Wang , Tao Wang, “China Lunar Probe Chang'e-1 Microwave Sounder, Design and some Results,” Proceedings of IGARSS (IEEE International Geoscience and Remote Sensing Symposium) 2010, Honolulu, HI, USA, July 25-30, 2010
23) “Change-2 Satellite's Camera Resolution Reaches One Meter,” Space Mart, Jan. 14, 2010, URL: http://www.spacemart.com/.../Change_2_Satellite_Camera_Resolution_Reaches_One_Meter
24) Qian Huang, Jingsong Ping, Jianguo Yan , Jianfeng Cao , Geshi Tang, Rong Shu, “Chang’E-1 Laser Altimetry Data Processing,” URL: ftp://184.108.40.206/pub/pjs/%D6%D0%B9%FA%BF%C6%D1%A7%D7%A8%BC%AD/AOGS-PS-0134_1.doc
26) G. Billig, E. Sørensen, Xi Luhua, “Chinese Lunar missions Chang’E-1 and Chang’E-2 and the ESOC support: an example of systems interoperability,” Proceedings of SpaceOps 2012, The 12th International Conference on Space Operations, Stockholm, Sweden, June 11-15, 2012
27) Barry E. DiGregorio, “Chinese Satellite Arrives at Moon - Radio tracking and control a technical and political feat,” IEEE Spectrum, Nov. 5, 2007, URL: http://spectrum.ieee.org/aerospace/space-flight/chinese-satellite-arrives-at-moon
The information compiled and edited in this article was provided by Herbert J. Kramer from his documentation of: ”Observation of the Earth and Its Environment: Survey of Missions and Sensors” (Springer Verlag) as well as many other sources after the publication of the 4th edition in 2002. - Comments and corrections to this article are always welcome for further updates. | 0.828813 | 3.256706 |
It has been proposed that fragments of an asteroid or comet impacted Earth, deposited silica- and iron-rich microspherules and other proxies across several continents, and triggered the Younger Dryas cooling episode 12,900 years ago. Although many independent groups have confirmed the impact evidence, the hypothesis remains controversial because some groups have failed to do so. We examined sediment sequences from 18 dated Younger Dryas boundary (YDB) sites across three continents (North America, Europe, and Asia), spanning 12,000 km around nearly one-third of the planet. All sites display abundant microspherules in the YDB with none or few above and below. In addition, three sites (Abu Hureyra, Syria; Melrose, Pennsylvania; and Blackville, South Carolina) display vesicular, high-temperature, siliceous scoria-like objects, or SLOs, that match the spherules geochemically. We compared YDB objects with melt products from a known cosmic impact (Meteor Crater, Arizona) and from the 1945 Trinity nuclear airburst in Socorro, New Mexico, and found that all of these high-energy events produced material that is geochemically and morphologically comparable, including: (i) high-temperature, rapidly quenched microspherules and SLOs; (ii) corundum, mullite, and suessite (Fe3Si), a rare meteoritic mineral that forms under high temperatures; (iii) melted SiO2 glass, or lechatelierite, with flow textures (or schlieren) that form at > 2,200 °C; and (iv) particles with features indicative of high-energy interparticle collisions. These results are inconsistent with anthropogenic, volcanic, authigenic, and cosmic materials, yet consistent with cosmic ejecta, supporting the hypothesis of extraterrestrial airbursts/impacts 12,900 years ago. The wide geographic distribution of SLOs is consistent with multiple impactors.
An ARI-led international team of astrophysicists has uncovered an enormous bubble currently being ‘blown’ by the regular eruptions from a binary star system within the Andromeda Galaxy. This work has today been published in Nature.
Recent observations with the Liverpool Telescope and Hubble Space Telescope, supported by spectroscopy from the Gran Telescopio Canarias and the Hobby-Eberly Telescope (some of the largest astronomy facilities on Earth), discovered this enormous shell-like nebula surrounding ‘M31N 2008-12a’, a recurrent nova located in our neighbouring Andromeda Galaxy. At almost 400 light-years across, and still growing, this shell is far bigger than a typical nova remnant (usually around a light-year in size) and even larger than most supernova remnants.
Dr Matt Darnley, lead author on the study and Reader in Time Domain Astrophysics at the ARI, explains:
“Each year ‘12a’ (as we lovingly refer to it) undergoes a thermonuclear eruption on the surface of its white dwarf. These are essentially hydrogen bombs, which eject material equivalent to about the mass of the Moon in all directions at a few thousand kilometres per second. These ejecta act like a snow plough, piling the surrounding ‘interstellar medium’ up to form the shell we observe – the outer ‘skin’ of the bubble, or the ‘super-remnant’ as we have named it.”
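A quick back-of-the-envelope check shows why these eruptions can sweep up such a large shell: roughly a lunar mass moving at a few thousand kilometres per second carries an enormous kinetic energy. The numbers below are illustrative order-of-magnitude inputs, not values from the paper:

```python
# Kinetic energy of ~1 lunar mass of ejecta at 3000 km/s.
MOON_MASS_KG = 7.35e22     # approximate mass of the Moon
v = 3.0e6                  # 3000 km/s, expressed in m/s

kinetic_energy = 0.5 * MOON_MASS_KG * v**2   # joules, per eruption
```

At roughly 3e35 J per event, repeated annually over many millennia, the cumulative energy input into the surrounding interstellar medium is substantial.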
These new observations coupled with state-of-the-art hydrodynamic simulations (carried out at LJMU and the University of Manchester) have revealed that this vast shell is in fact the remains of not just one nova eruption but possibly millions – all from the same system.
Despite its uniqueness and staggering scale, the discovery of this super-remnant may have further significance.
Dr Matt Darnley continued: “Studying 12a and its super-remnant could help to understand how some white dwarfs grow to their critical upper mass and how they actually explode once they get there as a ‘Type Ia Supernova’. Type Ia supernovae are critical tools used to work out how the universe expands and grows.”
In a related work, also led by Matt Darnley, this team has predicted that 12a will ultimately explode as a Type Ia Supernova in less than 20,000 years – a very short time in cosmological terms.
Dr Rebekah Hounsell, second author on both studies and currently a post-doctoral researcher at the University of Pennsylvania, explains:
“Type Ia supernovae are some of the largest explosions in the Universe and our most mature cosmological probes. The recurrent nova M31N 2008-12a is the most likely SN Ia progenitor to date, and provides us with the unique opportunity to study such a system before its final demise.
Lying within our nearest galactic neighbour, Andromeda, the explosion of 12a would be one of the closest supernovae observed by telescopes. The last observed supernova within our own galaxy occurred in 1604. Although we predict that 12a will undergo its explosion in less than twenty thousand years, there is the possibility of it happening within the next decade or so.”
Dr Darnley has also posted an article to the 'Behind the Paper' channel on the Nature Research website which tells the story of this remarkable discovery.
…that is, NEOCam – the Near-Earth Object (NEO) Camera, a dedicated space-based infrared NEO survey telescope. I know of no other mission concept of this sort that is in the works.
NASA selected NEOCam for phase-A (concept) study through a Discovery program competition in 2015. In 2017, NASA approved “extended-phase-A” funding for the project. NASA’s planetary defense program (full disclosure: I am a consultant to this program, and no one asked me to write this blog) wants to advance NEOCam to phase B – design. Higher-ups at NASA appear unwilling, as yet, to advocate for the extra funding needed in the planetary defense budget to develop NEOCam. (See below.)
As I’ve reported here before, the planetary defense community has been advocating for a space-based NEO survey telescope that will observe in the infrared for some time (see, for example, the NASA Small Bodies Assessment Group’s findings over the past few years). NASA is funding an “extended-phase-A” study of the NEOCam mission.
Why IR? As NEOCam principal investigator Amy Mainzer explains, asteroids typically reflect less than 10 percent of the sunlight that hits them in visible wavelengths. This visible light reflection is what ground-based observers can detect. The rest of the sunlight that hits an asteroid is emitted in infrared wavelengths – hence, the desirability of an IR NEO survey telescope. (Also, a space-based telescope can observe 24/7, while ground-based telescopes can only observe at night when the sky is clear.)
In 2018, NASA chief scientist Jim Green asked the Space Studies Board (SSB) of the National Academies of Sciences, Engineering, and Medicine to investigate and make recommendations about a space-based infrared NEO telescope’s capabilities. The SSB appointed an ad hoc Committee on Near Earth Object Observations in the Infrared and Visible Wavelengths, to explore the relative advantages and disadvantages of infrared (IR) and visible observations of near Earth objects (NEOs), review and describe the techniques that could be used to obtain NEO sizes from infrared observations and delineate any associated errors in determining NEO sizes, and “evaluate the strengths and weaknesses of these techniques and recommend the most valid techniques that give reproducible results with quantifiable errors.”
Over the course of three meetings, the committee was briefed by more than a dozen experts on NEO observations.The committee concluded that a space-based infrared telescope would be “more effective at detecting NEOs than visible wavelength in-space telescopes,” would “provide diameter information that visible wavelength telescopes cannot provide” (as the committee noted, “In addition to detecting NEOs and determining their orbits, it is necessary to estimate their mass to quantify their destructive potential. A NEO’s diameter is the most readily available indicator of its mass”), and would “not cost significantly more than in-space visible wavelength telescopes.”
The committee recommended that if congressionally mandated NEO detection requirements “are to be accomplished in a timely fashion (i.e., approximately 10 years), NASA should fund a dedicated space-based infrared survey telescope. Early detection is important to enable deflection of a dangerous asteroid.”
The committee also recommended that “if NASA develops a space-based infrared near Earth object (NEO) survey telescope, it should also continue to fund both short- and long-term ground-based observations to refine the orbits and physical properties of NEOs to assess the risk they might pose to Earth, and to achieve the George E. Brown, Jr. Near-Earth Object Survey Act goals.”
In congressional testimony on June 11, NASA associate administrator for space science Thomas Zurbuchen said, “NASA’s Planetary Defense Program will continue to fund the NEO Observations project and development of a space-based infrared instrument for detecting NEOs with this year’s budget request. Meanwhile, the Double Asteroid Redirection Test (DART) to demonstrate the kinetic impact technique for asteroid deflection will continue to make progress towards its planned 2021 launch.”
For me, the key words here are “with this year’s budget request.” NASA’s fiscal year 2020 budget request for planetary defense is not sufficient to complete the DART mission and advance NEOCam to development.
On April 29, as reported by Space Policy Online, NASA administrator Jim Bridenstine told attendees at the 2019 Planetary Defense Conference “that he was asked about NEOCam during a recent Senate Commerce Committee hearing,” and that “he would talk to…Zurbuchen, and the director of SMD’s planetary science division, Lori Glaze, about how to fund it. ‘But know this, we are committed to doing that,’ Bridenstine asserted.” | 0.897657 | 3.37662 |
Astronomers have discovered a star they believe has come back from the dead.
The star, located in a hazy nebula in the constellation Cassiopeia, is unlike most other stars. It shows no signs of hydrogen or helium — the two lightest elements in the universe and the final source of fuel for the nuclear reactions that power the hearts of stars. Despite this, it glows tens of thousands of times brighter than Earth's sun, and howls with a stellar wind that seems to have the strength of two stars.
Perhaps, write the authors of a new study published May 20 in the journal Nature, that's because this oddball star once was two stars — and two dead ones, at that. After some careful analysis of the star and the gassy nebula that surrounds it, the study authors determined that the star's unusual properties can be best explained by a rare phenomenon known as a double white dwarf merger. Essentially, two burnt-out stars got too close and collided, accumulated enough combined mass to start forging heavy elements again, and reignited.
"Such an event is extremely rare," study co-author Götz Gräfener, an astronomer at the Argelander Institute for Astronomy (AIfA) at the University of Bonn in Germany, said in a statement. "There are probably not even half a dozen such objects in the Milky Way, and we have discovered one of them."
A howling ghost
Gräfener and his colleagues came across this potential Frankenstar's monster while observing Cassiopeia with an infrared telescope. There, they discovered a ragged gas nebula with a bright star burning at its center. Strangely, the nebula didn't seem to emit any visible light, but only shined with intense infrared radiation. This, plus the nebula's distinct lack of hydrogen and helium gas suggested that the mystery star at the center of the nebula was a white dwarf — the shriveled, crystalline husk of a once-mighty star that has run out of fuel.
However, if the star was dead, it certainly wasn't acting the part. Quite the contrary — it seemed to be working its fiery butt off burning something, possibly oxygen and neon. Further observations showed that the star shined with infrared light 40,000 times as bright as Earth's sun, and belched out solar winds that sped through space at about 36 million mph (58 million km/h) — far stronger than a single white dwarf should be capable of, the researchers wrote.
A dance of the dead
Something, it seemed, had reanimated the dead star. The team ran some simulations, and found that all the star's surprising properties — including its exceptional wind — fit with a double white dwarf merger event.
"We assume that two white dwarfs formed there in close proximity many billions of years ago," study co-author Norbert Langer, also of AIfA, said in the statement. "They circled around each other, creating exotic distortions of space-time, called gravitational waves."
While creating these waves, the dead stars gradually lost energy and drifted closer and closer together. Eventually, the researchers hypothesized, the dwarfs collided, merging into a single star with a great enough mass to start forging heavy elements again. The fires were rekindled, and two dead stars were reanimated as one living one.
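The inspiral Langer describes has a standard back-of-the-envelope timescale, the Peters (1964) formula for a circular binary decaying by gravitational-wave emission. The masses and separation below are assumed for illustration, not taken from the study:

```python
# Peters (1964) merger timescale for a circular binary:
# t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg

def merger_time_s(m1: float, m2: float, a: float) -> float:
    """Time in seconds for a circular binary of separation a to merge."""
    return 5 * c**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))

# Two 0.8 solar-mass white dwarfs separated by ~1e9 m (about 1.4 R_sun):
t = merger_time_s(0.8 * M_SUN, 0.8 * M_SUN, 1.0e9)
years = t / 3.156e7    # works out to hundreds of millions of years
```

The steep a⁴ dependence is why such mergers take billions of years at wide separations yet finish quickly once the dwarfs are close.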
It sounds unlikely, but it's not unheard of in our weird universe. A 2018 study in the Monthly Notices of the Royal Astronomical Society predicted that as many as 11% of all white dwarfs may have merged with another white dwarf at some point in their history. However, according to the authors of the new study, only a handful of those are likely to exist in the Milky Way.
Finding one is sort of like winning an astrophysical lottery — except, instead of getting a big, six-figure check, the winners get a supernova. That's the likeliest fate for this revived star, the researchers wrote, as it quickly burns through its fuel reserve. Within a few thousand years, the star will probably be running on empty again, and will ultimately collapse under its own gravity. The star will blast away its outer shell in a dazzling explosion, crunch down into a hyper-dense neutron star and, finally, return to the cosmic graveyard.
Originally published on Live Science. | 0.889719 | 3.890443 |
This figure shows a model of the orbit of Comet Shoemaker-Levy 9 after it was captured by Jupiter.
The trajectory of Comet Shoemaker-Levy 9 over time
Mathematical theory suggests that comet Shoemaker-Levy 9 was likely a short-period comet which was captured into orbit around Jupiter in 1929 and began to execute the trajectory plotted in this diagram. This trajectory ended with a collision of the comet with Jupiter itself.
Unlike Earth’s atmosphere, Jupiter’s ‘sky’ hosts magnificent shades of orange, white, brown and blue.
Atmospheres can be all different colours, depending on what's in them.
Saturn is one of a few planets in our solar system surrounded by rings.
Vadim Sadovski/Shutterstock/Elements of this image furnished by NASA
We're not sure how the rings work or how they formed, but there are a few theories.
A planet-forming disk made from rock and gas surrounds a young star.
NASA/JPL-Caltech/SwRI/MSSS/Gerald Eichstädt/Seán Doran
Why isn't there an endless variety of planets in the universe? An astrophysicist explains why planets only come in two flavors.
Your calendar dates back to Babylonian times.
The Babylonians' calendar was passed down from civilization to civilization.
Untitled. 2015. Pen and Ink on Paper. 60 x 71 cm.
Ernst van der Wal
Beautiful art can provide hope and healing.
On June 5-6, 2012, NASA’s Solar Dynamics Observatory collected images of one of the rarest predictable solar events: the transit of Venus across the face of the Sun.
This hot, acidic neighbor with its surface veiled in thick clouds hasn't benefited from the attention showered on Mars and the Moon. But Venus may offer insights into the fate of the Earth.
When it was young, the Sun spun fast – very fast. It would do one rotation in just one or two Earth days.
Yes, the Sun absolutely spins. In fact, everything in the universe spins. Some things spin faster than the Sun, some are slower and some things spin 'backwards'.
The bright spot in the centre of the image is a new planet forming.
Valentin Christiaens et al./ ESO
Astronomers have found the first observational evidence for a disc of material around a giant young planet at a distant star. It's a place they think moons can form.
Searching for planets around nearby stars is like searching for a needle in a field of haystacks.
Science is full of surprises. While searching for planets orbiting nearby stars, researchers stumbled across the remains of a star that once outshone the Sun.
Distant stars above the ruins of Sherborne Old Castle, in the UK.
When you look up at the vastness of space you can see hundreds, thousands and even millions of years into the past.
Nobody knows for sure - but it’s possible.
There are probably more than a million planets in the universe for every single grain of sand on Earth. That's a lot of planets. My guess is that there probably is life elsewhere in the Universe.
Once people get there, Mars will be contaminated with Earth life.
NASA/Pat Rawlings, SAIC
NASA's InSight Mars lander touches down Nov. 26, part of a careful robotic approach to exploring the red planet. But human exploration of Mars will inevitably introduce Earth life. Are you OK with that?
The Sun is a star – but it’s not the only one.
NASA/GSFC/Solar Dynamics Observatory
There are lots of places where it's much, much hotter than the Sun. And the amazing thing is that this heat also makes new atoms - tiny particles that have made their way long ago from stars to us.
Pluto’s ghoulish cousin, 2015 TG387, lurks in the distant reaches of our own Solar System.
Illustration by Roberto Molar Candanosa and Scott Sheppard, courtesy of Carnegie Institution for Science.
Whether you call it Planet X or Planet Nine, talk of another planet lurking in our Solar system won't go away. So what does the discovery of a new object – nicknamed "The Goblin" – add to the debate?
There are plans to cause HAVOC on Venus.
The upper atmosphere of Venus is the most Earth-like extra-terrestrial location in the solar system. It could even host life.
Enjoying the planets lined up in a row.
The five planets visible to the naked eye since ancient times are putting on a dazzling display this month, in a night-sky dance along with the Moon.
Pluto in enhanced color, to illustrate differences in the composition and texture of its surface.
NASA / Johns Hopkins University Applied Physics Laboratory / Southwest Research Institute
Pluto has a density between that of rock and ice – so that immediately suggests the dwarf planet is made of a mix of both. But how do we know?
The other galaxies are there, but they are hiding a very long way away.
We are in the Milky Way. If you travelled on an extremely fast spaceship for more than two million years, you would reach our neighbour, the Andromeda galaxy. All other galaxies are even further away.
Venus shines bright in the sky above Victoria.
Flickr/Indigo Skies Photography
The planets we can see in the sky were known to the ancient Greeks as 'wandering stars'. But they appeared much earlier in the stories and traditions of Australia's Indigenous people.
The colorful cloud belts dominate Jupiter’s southern hemisphere in this image captured by NASA’s Juno spacecraft.
NASA/JPL-Caltech/SwRI/MSSS/Kevin M. Gill
Jupiter's bands are one of its most striking features – and can be seen from Earth – but they only go so deep within the giant planet. Now scientists think they know why.
Another approach to the search for extrasolar life is the detection of radio signals sent by other civilizations. The modest attempts to pick up any such signals have generated much interest, speculation, and debate. It is true that the most powerful radio telescopes on Earth could receive signals from similar telescopes, aimed directly at Earth, from any other spot in the galaxy. Considerable thought has also been devoted to what wavelengths would be used for communication and what types of information might be sent. The laws of physics and radio propagation in the galaxy suggest that the best wavelengths are in what is known as the "water hole" near 20 centimeters. This activity is commonly referred to as SETI, the Search for ExtraTerrestrial Intelligence. In 1990, NASA funded a modest SETI effort, but its budget was cut after only a few years, before the program got seriously under way. Senator Proxmire gave SETI one of his famed Golden Fleece awards and fumed "not a penny for SETI." Others in influential positions were concerned about ridicule of a program that might run for decades or even millennia seeking faint radio signals from other civilizations, and public funding for SETI searches has been very limited. (Just as it is a problem on Earth, funding would probably be a critical factor on other worlds too. On Earth, in our most economically prosperous times, we cannot even listen. Sending the signals would be much more complex and costly.)
Unfortunately, it is very difficult to know if SETI is an effective use of resources. If the Rare Earth Hypothesis is correct, then it clearly is a futile effort. If life is common and it commonly leads to the evolution of intelligent creatures that have long, prosperous planetary tenures, then it is possible that enlightened aliens might be beaming signals off into space. A key factor in deciding whether SETI makes sense involves the lifetime of civilizations with radio technology. Does such a civilization last only centuries before nuclear war, starvation, or some other calamity causes its decline? Or does it last forever? In the most optimistic minds, "Star Trek" societies might populate the stars. But even if they do, it is a real question whether any of them would or could beam enormous amounts of radio power into space to potential audiences that are prevented by the vast interstellar distances from ever returning the message in a timely manner. There probably are other civilizations in the galaxy that have radio telescopes, but the vast numbers of stars and the vast distances involved are barriers that may always keep SETI more an experiment of the imagination than a large-scale scientific endeavor. An exception might be made for the limited number of nearby stars that have planetary systems. If some of these are found to have Earth-like planets with atmospheric compositions indicative of life, then the public might support either sending signals or listening. And, of course, even though we do not intentionally beam radio messages to nearby stars, Earth is a potent transmitter of radio power emanating from radar, television stations, and other sources.
The answers above are correct; I'd just like to contribute a more "talky", less technical (and therefore also less accurate, I must note) explanation. (High points are in bold.)
If I understand correctly, what you're unclear on is why the teacher says the energy required is the "same" to get going from Earth and to stop at A Centauri. If that's not the part you want to know, correct me.
Assuming it is what you want to know: The friction of atmosphere and the pull of Earth's gravity are quite insignificant compared to the energy required to accelerate to 0.99c, and it seems to me your teacher was speaking in conceptual terms, rather than giving a precise calculation. That is, I don't think they wanted to say the energies for acceleration and deceleration are exactly equal; rather, that they are "broadly equal". In that sense, they were essentially correct.
What your teacher probably didn't consider (possibly because they didn't want to complicate the example further) is that if the ship needs to carry fuel (such as for a rocket), it becomes lighter as it burns that fuel. The energy required for a given acceleration or deceleration therefore decreases over the trip, because the energy required for constant acceleration is a function of the mass of the object being accelerated. (Deceleration is, put very simply, just acceleration in the opposite direction, so the same reasoning applies to both.)
Thus, the ship would actually require less energy to decelerate than it had required to accelerate, if it is powered by a rocket engine or some other engine that consumes significant amounts of fuel. A hypothetical nuclear- or fusion-powered engine, on the other hand, would probably consume fuel much slower (because it gets much more energy out of burning a given amount of fuel, therefore needs to burn less fuel mass overall to make the trip), and thus the mass of the ship would change less over the trip and therefore also the amounts of energy required to accelerate and to decelerate would be closer to equal; possibly much closer than with a rocket.
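The fuel effect described above can be sketched with the classical Tsiolkovsky rocket equation. This is a simplification: it ignores relativity entirely, and the exhaust velocity and delta-v below are made-up illustrative values, but it shows how strongly fuel dominates a rocket's initial mass.

```python
import math

def mass_ratio(delta_v, exhaust_velocity):
    """Classical Tsiolkovsky rocket equation: m0 / m1 = exp(dv / ve).

    m0 is the ship's mass with fuel, m1 without. Not valid near light
    speed, but it shows how much of a rocket's starting mass is fuel.
    """
    return math.exp(delta_v / exhaust_velocity)

# Hypothetical chemical rocket (ve ~ 4.5 km/s) doing a modest 30 km/s
# of delta-v -- nowhere near 0.99c, yet fuel is already ~99.9% of m0.
ratio = mass_ratio(30_000, 4_500)
print(f"initial/final mass ratio: {ratio:.0f}")  # roughly 786
```

Since almost all of that mass is gone by the deceleration phase, the deceleration burn is far cheaper in energy terms, which is the asymmetry the answer describes.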
Getting back to planetary gravity and atmospheric drag: In an Earth-like atmosphere, the amount of drag produced (especially over the mere ~100 km of thickness of the atmosphere the ship needs to go through from surface to space) is tiny relative to the energy needed to get to 0.99c.
Planetary gravity has a somewhat larger effect, but again, negligible compared to the energy needed to accelerate to the final speed. And as with the atmosphere, the gravity becomes weaker the farther you get from the planet, attenuating to basically nothing within a few thousand kilometers (which is practically zero distance relative to a trip about 40,000,000,000,000 km long).
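To put rough numbers on this (a back-of-the-envelope sketch, not part of the original answer), compare the energy per kilogram needed to climb out of Earth's gravity well with the relativistic kinetic energy per kilogram at 0.99c:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg
R = 6.371e6     # Earth's mean radius, m
c = 2.998e8     # speed of light, m/s

# Energy per kg to escape Earth's gravity starting from the surface.
escape_energy = G * M / R                      # ~6.3e7 J/kg

# Relativistic kinetic energy per kg at v = 0.99c: (gamma - 1) c^2.
gamma = 1.0 / math.sqrt(1.0 - 0.99 ** 2)       # ~7.09
kinetic_energy = (gamma - 1.0) * c ** 2        # ~5.5e17 J/kg

print(f"escape/kinetic ratio: {escape_energy / kinetic_energy:.1e}")
```

The gravity-well term is roughly ten orders of magnitude smaller than the kinetic term, which is why the teacher could safely ignore it.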
Finally, unless your teacher stated otherwise, there is no reason to assume that the destination is not an Earth-like planet. (We know there are none like that in the A Centauri system IRL, but then, this was a theoretical example.) If it is, then both gravity and atmospheric drag at the destination will be comparable to Earth's, making the situation symmetrical in this regard.
Realistically, if the destination is a rocky planet roughly the size of Earth (such as Alpha Centauri Cb, aka Proxima Centauri b), then it will have more or less Earth's gravity. Even if it has no atmosphere (we currently cannot tell about A Cen Cb), the gravitational pull is much larger than the atmospheric friction for Earth-like conditions; i.e., the situation would remain at least mostly symmetrical.
I hope that clears it up for you a bit.
EDIT: About relativistic effects: They're not really important for your question. They would of course be present, but as they are symmetrical for the acceleration and deceleration phase of the trip, and increase in magnitude as velocity approaches c, they have no real impact on the potential asymmetry of energy requirements for "takeoff" and "landing", as in both phases the ship will be going incredibly slowly compared to the speed of light.
As to what the "relativistic effects" actually do: Put very very simply, the energy required to achieve the same acceleration for the ship increases as the ship's speed approaches c; very close to c, the increases become huge. (They would be infinite at c, which is, very simply put, the reason no massive object can actually reach c, only approach it to an arbitrarily close fraction.) For 0.99c, the ship would accelerate as if its mass were about 7 times greater than it "actually" is.
The reason: the ship's mass actually would be higher, at least from the point of view of an observer relative to whom the ship is doing 0.99c. Energy is ultimately the same thing as mass, and a moving object has kinetic energy; at 0.99c the kinetic energy is so high that it "weighs" about six times the ship's "normal" mass (i.e., its mass at rest) on top of that rest mass.
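The "about 7 times greater" figure comes from the Lorentz factor, gamma = 1 / sqrt(1 - (v/c)^2), which grows without bound as v approaches c. A quick sketch:

```python
import math

def lorentz_factor(beta):
    """Lorentz factor for a speed expressed as beta = v / c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

for beta in (0.5, 0.9, 0.99, 0.999):
    print(f"v = {beta}c -> gamma = {lorentz_factor(beta):.2f}")
# At 0.99c, gamma is about 7.09 -- the "7 times" mentioned above.
```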
So, in this case, relativistic effects increase the overall energy required to make the trip, but since they are (effectively) absent at the "takeoff" and "landing" phases that your question centers on, they only serve to further decrease the relative significance of the takeoff and landing energies as a proportion of the total energy cost of the trip.
Make no mistake though, even discounting relativistic effects (which is unphysical, i.e., a thought experiment only) the energy for leaving / entering a planetary atmosphere / gravity well is still insignificant compared to the energy needed for the rest of the trip to 0.99c and back.
This is about as good an explanation as I can give without explaining the basic concept of special relativity in full, which is well beyond the scope of this post.
Astronomers May Have Found Our Next Home: Two ‘Nearby’ Earth-Sized Planets Discovered That Might Support Life
Since we seem to be using up our current home, Earth, at a rather alarming pace, astronomers and scientists have been hard at work looking for a new place that we humans can call home at some point in the (distant?) future.
BONUS: These two newly-discovered, possibly habitable planets are only 12.5 light years from Earth! (That’s 73,483,000,000,000 miles in American. Easy-peasy.)
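The mileage figure checks out: multiplying the speed of light in miles per second by the seconds in a year, and then by 12.5, reproduces the article's number (constants rounded; this is just a sanity check, not from the article itself):

```python
# Speed of light in miles per second, and seconds in a Julian year.
C_MILES_PER_SEC = 186_282
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # 31,557,600 s

miles_per_light_year = C_MILES_PER_SEC * SECONDS_PER_YEAR  # ~5.88e12
distance = 12.5 * miles_per_light_year
print(f"12.5 light-years is about {distance:.4e} miles")   # ~7.348e13
```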
The two planets orbit Teegarden’s star, the 24th closest star to the sun, a red dwarf star that was only just discovered in 2003. These two new potential exoplanets were spotted this month using an instrument called CARMENES (Calar Alto high-Resolution search for M dwarfs with Exoearths with Near-infrared and optical Échelle Spectrographs) during a survey being conducted for exoplanets.
According to CNN…
Research on the planets and their sun, published in the journal Astronomy and Astrophysics, reveals Teegarden’s star seems to be stable, without large solar flares or other violent activity that could threaten the potential for life on the two promising candidates.
If the estimated orbit and rotation speeds are accurate, and there are no unexpected factors in the solar system to disrupt astronomers' other calculations, Teegarden's two planets could host rocky environments and flowing, puddling water. However, all of these assumptions are estimates, and not actually firsthand observations — for now. The Teegarden planet discoveries are part of a larger effort by astronomers to locate potentially life-supporting planets in order to refine observation and research technology, like high-powered telescopes, to learn even more about them.
A press release announcing the discovery states…
The innermost planet, Teegarden b, has a 60% chance of having a temperate surface environment, that is, temperatures between 0°C and 50°C. The surface temperature should be closer to 28°C (2) assuming a similar terrestrial atmosphere, but could be higher or lower depending on its composition.
Teegarden b is the planet with the highest Earth Similarity Index (ESI) discovered so far, which means that it has the closest mass and insolation to terrestrial values, albeit we only know its minimum mass (3). However, how this translates to habitability depends on many other factors, especially since this planet orbits a red dwarf star.
In the immortal words of one Lloyd Christmas, “So you’re telling me there’s a chance?”
Yes, Lloyd. Yes, there is.
Check out a simulated tour from our solar system to Teegarden's star system created by the University of Göttingen and another video that goes further in depth on the discovery below.
Ever since humans first looked at the stars and contemplated the existence of other worlds we have asked the question, “Are we alone in the Universe?”
Now, with the discovery of thousands of exoplanets orbiting alien stars and a new generation of huge, Earth- and space-based telescopes coming online, this generation might just be the first that is able to answer that question once and for all.
Lisa Kaltenegger, director of the Carl Sagan Institute at Cornell University, is in town to give two lectures at the Adler Planetarium on the search for alien life. The lectures will be simulcast to museums and planetariums around the world.
Below, an edited Q&A with Kaltenegger.
Talk a little bit about the work of the Carl Sagan Institute, where you serve as director. What is the mission?
The Carl Sagan Institute is looking and trying to find signs of life inside the Solar System and out. So we are at this connection between planetary science and everything we know about our Solar System and trying to use that to find signs of life on planets that are not orbiting our sun but others. It is an interdisciplinary institute so currently we have about 27 faculty in 14 different departments and that means we have biologists, chemists, astronomers, physicists, engineers and also science communication professors. And the idea is to generate a forensic tool kit to spot life, if it exists, inside the Solar System and outside.
In terms of the ways that we can search for and hopefully one day identify signs of extraterrestrial life, what are the most promising tools to help you identify those signs?
The most promising method we have right now is when we collect the light from a planet outside our Solar System orbiting an alien sun we can actually see what the atmosphere or the air there is made out of. And if you look at the Earth and you see oxygen with gas like methane at the same time, that indicates that something is producing oxygen in huge amounts and that tells you that there is life on our planet. So we are looking for the exact same thing on another planet. That’s our most robust bio-signature that we have so far—looking at the Earth as a Rosetta Stone and taking what we know of our home planet and then applying it to other ones.
And then we have other interesting bio-signatures—and what we mean by that is that when you look at life on Earth it is not just the plants you see outside on the window, but it’s also some kind of extreme life forms or some algae, things like that. So the color of the planet—and that is a harder measurement to take—but the color of a planet can also indicate what kind of life is on that planet. Of course, under the assumption that life is pretty similar to what we have here, and that’s another whole, really interesting can of worms that you open up. Other people at the Carl Sagan Institute are actually looking at that and asking if you have a place like Titan (Saturn’s largest moon) that is incredibly cold and where you don’t have a water ocean but a methane ocean, how would life have to be to survive there? And so we have a biochemist, with an astronomer and an engineer to try to figure out how that could work.
What do scientists now believe are the prerequisites of life? Water?
I think the current scientific thought is that generally we want to look for water because on the Earth we don’t know of any kind of life that doesn’t need water. That’s our first clue. So we are saying that if we look at all the life on Earth and all of it needs water so that’s what we are looking for. However, having said that, we do understand that life could also be different. But of course we haven’t found life. We haven’t found life on Titan. So we haven’t found any life that doesn’t need water. We can speculate what life could be like in these other conditions, what it could look like ... so in a way it’s in the realm of being a little bit like science fiction but having a lot of science in the fiction.
We are trying to figure out what alternative types of life could look like so that we don’t miss it. So we are trying to make a better tool kit so that it doesn’t just spot the obvious kinds of life but that it can also spot things that are may be not as obvious. The big idea is to try to figure out what kind of environment you need to get life in the first place. We don’t know that. We know roughly what the conditions were on the Earth but it could be that life could actually start under very different conditions or maybe it needs exactly the conditions of the Earth. And that is why we are looking.
It seems like there has never been a better time to be in your line of work. The James Webb Space Telescope is going to be coming online which will be able to surpass even what Hubble was able to achieve. There’s also the discovery of thousands of exoplanets
We are nearly at 4,000.
Remind me. When did we discover the first exoplanet?
The first exoplanet detected around a Sun-like star was in 1995. But the first exoplanets likely to be rocks around a Sun-like star came in 2013, with Kepler-62e and f — but people will debate that. From everything that we know, Kepler-62e and f should be small enough that they could be (rocky planets), and they are the right distance from their star.
With the next generation of telescopes are we going to be able to directly image planets around other stars?
Absolutely. Basically, with this 40-meter telescope (the Extremely Large Telescope, or ELT, being built in Chile) we will be able to actually image planets like the one next to our neighboring star. The planet around Proxima Centauri will be within the limit of what we can image with the ELT. But even if we can't, even if the light from the star is overshadowing it and the planet and the star are too close together, we can actually read what is in the air of that planet by looking at the planet as it transits its star — when it goes in front of its star and part of the light gets filtered through the planet's atmosphere.
What I usually say is that light travels through the universe, and when sunlight hits your hand it's warm—so light is energy. If that energy hits a molecule in the air of another world, like oxygen or methane or water, then that specific energy, that color of light, is missing from the light that gets to our telescope. And by seeing what is missing in the light, I can tell you what molecule it hit in the atmosphere of another planet before getting to me. This is how I can read the spectral fingerprint of the planet. The missing light is telling me what it encountered as it travelled to me.
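As a related back-of-the-envelope illustration (an editorial aside, not from the interview): the fraction of starlight a transiting planet blocks scales as the square of the planet-to-star radius ratio, which shows how tiny the signal is for an Earth-sized world.

```python
R_EARTH = 6.371e6   # Earth's radius, m
R_SUN = 6.957e8     # Sun's radius, m

# Transit depth: fraction of the star's disk covered by the planet.
depth = (R_EARTH / R_SUN) ** 2
print(f"An Earth twin transiting a Sun twin dims it by ~{depth:.1e}")
```

The dip is less than one part in ten thousand, and the atmospheric absorption signal is a small fraction of even that, which is why such large telescopes are needed.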
So far we have talked about life. How about intelligent life? What are the most promising ways of detecting signs of intelligent life?
I think the most promising way to do this is to go one step forward. We are looking at these gases in the atmosphere. And with bigger telescopes we can find gases that are less abundant, like technological gases. For example, Freon coming out of our fridges. I think that is probably a straightforward way to go: to find signs of technology and, if there are enough signs of technology, signs of an intelligent civilization.
The other way we can do this, of course, is to look for radio signals. The trade-off you make if you focus on radio signals is that we have used radio for a bit less than 100 years, and we are already moving away from it (we are using Bluetooth, internet cables, etc.), so it seems to be an incredibly short window in the evolution of a species, if they are like ours, in which you actually produce significant radio signals. It's a very interesting approach. I wish my colleagues the best of luck, because I think it's also courageous. And there's no dispute: if a signal were to come in that says 'Hello, my name is blah blah blah', there's no dispute, you would say that's a sign of intelligent life. But if you do it via gases, my view is that you can find life and then build bigger telescopes to learn more about it, whether or not it has graduated to the stage of being technology-ready.
Your talk at the Adler Planetarium, which is being simulcast to museums and planetariums all over the world, is about the search for life in the universe. The question of whether we are alone in the universe is really an age-old question that has existed since we first looked up at the stars. Do you think we will finally be the generation that is able to answer that question definitively?
Absolutely! If life exists everywhere it can, it's just going to be a couple of years out for us to figure that out. If life is very rare, or it doesn't produce a unique signature, it will be much harder. What I mean by that is that the Earth has had life for about 3.5 billion years at least, but in the first 1.5 to 2 billion years it produced methane and carbon dioxide, and the problem is that those signatures are not unique because they could also be produced geologically. So if life exists everywhere it can, we will be able to spot it with the telescopes we are building.
I don’t think people always appreciate the amazing age of scientific discovery that we are living in.
I completely agree. There are so many bad things in the news but then we are also living in this golden age of discovery where we are figuring out our place in the universe.
If you look at history books in the future, there’s going to be the time before humans learned they were not alone, and after. And we might just live exactly in that generation—there are a lot of indications that say we are because of the discovery of so many exoplanets—we might just be the generation that actually figures that out. That is amazing.
Oct. 26: Viewers on four continents will watch a virtual presentation hosted by Adler Planetarium in early November to learn about the possibility of life on other planets.
Oct. 17: An international team that includes Chicago astronomers recently observed the collision of two high-density neutron stars, a historic discovery that confirms decades of scientific work.
Aug. 25, 2016: A planet that could potentially host life has been discovered orbiting Proxima Centauri, the star closest to our solar system, according to a report published Wednesday by more than 30 international scientists.
A brief and unusual flash spotted in the night sky on June 16, 2018, puzzled astronomers and astrophysicists across the globe. The event - called AT2018cow and nicknamed "the Cow" after the coincidental final letters in its official name - is unlike any celestial outburst ever seen before, prompting multiple theories about its source.
Over three days, the Cow produced a sudden explosion of light at least 10 times brighter than a typical supernova, and then it faded over the next few months. This unusual event occurred inside or near a star-forming galaxy known as CGCG 137-068, located about 200 million light-years away in the constellation Hercules. The Cow was first observed by the NASA-funded Asteroid Terrestrial-impact Last Alert System telescope in Hawaii.
So exactly what is the Cow? Using data from multiple NASA missions, including the Neil Gehrels Swift Observatory and the Nuclear Spectroscopic Telescope Array (NuSTAR), two groups are publishing papers that provide possible explanations for the Cow's origins. One paper argues that the Cow is a monster black hole shredding a passing star. The second paper hypothesizes that it is a supernova - a stellar explosion - that gave birth to a black hole or a neutron star.
Researchers from both teams shared their interpretations at a panel discussion on Thursday, Jan. 10, at the 233rd American Astronomical Society meeting in Seattle.
Watch what scientists think happens when a black hole tears apart a hot, dense white dwarf star. A team working with observations from NASA's Neil Gehrels Swift Observatory suggests this process explains a mysterious outburst known as AT2018cow, or "the Cow." Credit: NASA's Goddard Space Flight Center
A Black Hole Shredding a Compact Star?
One potential explanation of the Cow is that a star has been ripped apart in what astronomers call a "tidal disruption event." Just as the Moon's gravity causes Earth's oceans to bulge, creating tides, a black hole has a similar but more powerful effect on an approaching star, ultimately breaking it apart into a stream of gas. The tail of the gas stream is flung out of the system, but the leading edge swings back around the black hole, collides with itself and creates an elliptical cloud of material. According to one research team using data spanning from infrared radiation to gamma rays from Swift and other observatories, this transformation best explains the Cow's behavior.
"We've never seen anything exactly like the Cow, which is very exciting," said Amy Lien, an assistant research scientist at the University of Maryland, Baltimore County and NASA's Goddard Space Flight Center in Greenbelt, Maryland. "We think a tidal disruption created the quick, really unusual burst of light at the beginning of the event and best explains Swift's multiwavelength observations as it faded over the next few months."
Lien and her colleagues think the shredded star was a white dwarf - a hot, roughly Earth-sized stellar remnant marking the final state of stars like our Sun. They also calculated that the black hole's mass ranges from 100,000 to 1 million times the Sun's, almost as large as the central black hole of its host galaxy. It's unusual to see black holes of this scale outside the center of a galaxy, but it's possible the Cow occurred in a nearby satellite galaxy or a globular star cluster whose older stellar populations could have a higher proportion of white dwarfs than average galaxies.
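A rough order-of-magnitude check (my own sketch under assumed stellar parameters, not a calculation from the paper) uses the standard tidal-disruption estimate r_t ~ R* (M_bh / M*)^(1/3) and compares it with the black hole's Schwarzschild radius. For a white dwarf, disruption happens outside the horizon only near the lower end of the quoted black-hole mass range, which is part of what makes this scenario so constrained:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

# Assumed white-dwarf properties (roughly Earth-sized, as in the article).
M_STAR = 0.6 * M_SUN
R_STAR = 6.4e6       # m

def tidal_radius(m_bh):
    """Order-of-magnitude radius inside which tides shred the star."""
    return R_STAR * (m_bh / M_STAR) ** (1.0 / 3.0)

def schwarzschild_radius(m_bh):
    return 2.0 * G * m_bh / c ** 2

for m_solar in (1e5, 1e6):   # the paper's quoted black-hole mass range
    m = m_solar * M_SUN
    rt, rs = tidal_radius(m), schwarzschild_radius(m)
    print(f"{m_solar:.0e} Msun: r_t = {rt:.2e} m, r_s = {rs:.2e} m, "
          f"disruption outside horizon: {rt > rs}")
```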
A paper describing the findings, co-authored by Lien, will appear in a future edition of the journal Monthly Notices of the Royal Astronomical Society.
"The Cow produced a large cloud of debris in a very short time," said lead author Paul Kuin, an astrophysicist at University College London (UCL). "Shredding a bigger star to produce a cloud like this would take a bigger black hole, result in a slower brightness increase and take longer for the debris to be consumed."
Or a New View of a Supernova?
A different team of scientists was able to gather data on the Cow over an even broader range of wavelengths, spanning from radio waves to gamma rays. Based on those observations, the team suggests that a supernova could be the source of the Cow. When a massive star dies, it explodes as a supernova and leaves behind either a black hole or an incredibly dense object called a neutron star. The Cow could represent the birth of one of these stellar remnants.
"We saw features in the Cow that we have never seen before in a transient, or rapidly changing, object," said Raffaella Margutti, an astrophysicist at Northwestern University in Evanston, Illinois, and lead author of a study about the Cow to be published in The Astrophysical Journal. "Our team used high-energy X-ray data to show that the Cow has characteristics similar to a compact body like a black hole or neutron star consuming material. But based on what we saw in other wavelengths, we think this was a special case and that we may have observed - for the first time - the creation of a compact body in real time."
Margutti's team analyzed data from multiple observatories, including NASA's NuSTAR, ESA's (the European Space Agency's) XMM-Newton and INTEGRAL satellites, and the National Science Foundation's Very Large Array. The team proposes that the bright optical and ultraviolet flash from the Cow signaled a supernova and that the X-ray emissions that followed shortly after the outburst arose from gas radiating energy as it fell onto a compact object.
Typically, a supernova's expanding debris cloud blocks any light from the compact object at the center of the blast. Because of the X-ray emissions, Margutti and her colleagues suggest the original star in this scenario may have been relatively low in mass, producing a comparatively thinner debris cloud through which X-rays from the central source could escape.
"If we're seeing the birth of a compact object in real time, this could be the start of a new chapter in our understanding of stellar evolution," said Brian Grefenstette, a NuSTAR instrument scientist at Caltech and a co-author of Margutti's paper. "We looked at this object with many different observatories, and of course the more windows you open onto an object, the more you can learn about it. But, as we're seeing with the Cow, that doesn't necessarily mean the solution will be simple."
NuSTAR is a Small Explorer mission led by Caltech and managed by JPL for NASA's Science Mission Directorate in Washington. NuSTAR was developed in partnership with the Danish Technical University and the Italian Space Agency (ASI). The spacecraft was built by Orbital Sciences Corporation in Dulles, Virginia. NuSTAR's mission operations center is at UC Berkeley, and the official data archive is at NASA's High Energy Astrophysics Science Archive Research Center. ASI provides the mission's ground station and a mirror archive. JPL is managed by Caltech for NASA.
NASA's Goddard Space Flight Center manages the Swift mission in collaboration with Penn State in University Park, the Los Alamos National Laboratory in New Mexico and Northrop Grumman Innovation Systems in Dulles, Virginia. Other partners include the University of Leicester and Mullard Space Science Laboratory of the University College London in the United Kingdom, Brera Observatory and ASI.
News Media Contact: Calla Cofield
Jet Propulsion Laboratory, Pasadena, Calif.
By Jeanette Kazmierczak
NASA's Goddard Space Flight Center, Greenbelt, Md. | 0.898201 | 3.872569 |
Comets are known to have a temper. As they swoop in from the outer edges of our solar system, these icy bodies begin spewing gas and dust as they venture closer to the sun. Their luminous outbursts can result in spectacular sights that grace the night sky for days, weeks or even months.
But comets aren’t born that way, and their pathway from their original formation location toward the inner solar system has been debated for a long time. Comets are of great interest to planetary scientists because they are likely to be the most pristine remnants of material left over from the birth of our solar system.
In a study published in the Astrophysical Journal Letters, a team of researchers including Kathryn Volk and Walter Harris at the University of Arizona Lunar and Planetary Laboratory report the discovery of an orbital region just beyond Jupiter that acts as a “comet gateway.” This pathway funnels icy bodies called centaurs from the region of the giant planets—Jupiter, Saturn, Uranus and Neptune—into the inner solar system, where they can become regular visitors of Earth’s neighborhood, cosmically speaking.
Roughly shaped like an imaginary donut encircling the area, the gateway was uncovered as part of a simulation of centaurs, small icy bodies traveling on chaotic orbits between Jupiter and Neptune.
Centaurs: Icy Rogues on Haphazard Trails
Centaurs are believed to originate in the Kuiper belt, a region populated by icy objects beyond Neptune and extending out to about 50 Astronomical Units, or 50 times the average distance between the sun and the Earth. Close encounters with Neptune nudge some of them onto inward trajectories, and they become centaurs, which act as the source population of the roughly 1,000 short-period comets that zip around the inner solar system. These comets, also known as Jupiter-family comets, or JFCs, include comets visited by spacecraft missions such as Tempel 1 (Deep Impact), Wild 2 (Stardust) and 67P/Churyumov-Gerasimenko (Rosetta).
“The chaotic nature of their orbits obscures the exact pathways these centaurs follow on their way to becoming JFCs,” said Volk, a co-author on the paper and an associate staff scientist who studies Kuiper belt objects, planetary dynamics and planets outside our solar system. “This makes it difficult to figure out where exactly they came from and where they might go in the future.”
Jostled by the gravitational fields of several nearby giant planets—Jupiter, Saturn and Neptune—centaurs don’t tend to stick around, making for a high-turnover neighborhood, Harris said.
“They rattle around for a few million years, perhaps a few tens of millions of years, but none of them were there even close to the time when the solar system formed,” he said.
“We know of 300 centaurs that we can see through telescopes, but that’s only the tip of an iceberg of an estimated 10 million such objects,” Harris added.
“Most centaurs we know of weren’t discovered until CCDs became available, plus you need the help of a computer to search for these objects,” Volk said. “But there is a large bias in observations because the small objects simply aren’t bright enough to be detected.”
Where Comets Go to Die
Every pass around the sun inflicts more wear and tear on a comet until it eventually breaks apart, has a close encounter with a planet that ejects it from the inner solar system, or its volatiles—mostly gas and water—are depleted.
“Often, much of the dust remains and coats the surface, so the comet doesn’t heat up much anymore and it goes dormant,” Harris said.
By some mechanism, a steady supply of “baby comets” must replace those that have run their course, “but until now, we didn’t know where they were coming from,” he added.
To better understand how centaurs become JFCs, the research team focused on creating computer simulations that could reproduce the orbit of 29P/Schwassmann-Wachmann 1, or SW1, a centaur discovered in 1927 and thought to be about 40 miles across.
SW1 has long puzzled astronomers with its high activity and frequent explosive outbursts despite the fact that it is too far from the sun for water ice to melt. Both its orbit and activity put SW1 in an evolutionary middle ground between the other centaurs and the JFCs, and the original goal of the investigation was to explore whether SW1’s current circumstances were consistent with the orbital progression of the other centaurs.
To accomplish this, the team modeled the evolution of bodies from beyond Neptune’s orbit, through the giant planet’s region and inside Jupiter’s orbit.
“The results of our simulation included several findings that fundamentally alter our understanding of comet evolution,” Harris said. “Of the new centaurs tracked by the simulation, more than one in five were found to enter an orbit similar to that of SW1 at some point in their evolution.”
In other words, even though SW1 appears to be the only large centaur of the handful of objects currently known to occupy the “cradle of comets,” it is not the outlier it was thought to be, but rather ordinary for a centaur, according to Harris.
In addition to the commonplace nature of SW1’s orbit, the simulations led to an even more surprising discovery.
“Centaurs passing through this region are the source of more than two-thirds of all Jupiter-family comets,” Harris said, “making this the primary gateway through which these comets are produced.”
“Historically, our assumption has been that the region around Jupiter is fairly empty, cleaned out by the giant planet’s gravity, but our results teach us that there is a region that is constantly being fed,” Volk said.
This constant source of new objects may help explain the surprising rate of icy body impacts with Jupiter, such as the famous Shoemaker-Levy 9 event in 1994.
A Comet Worthy of Worship
Based on estimates and calculations of the number and size of objects entering, inhabiting and leaving the gateway region, the study predicted it should sustain an average population of about 1,000 Jupiter-family objects, not too far off the 500 that astronomers have found so far.
The results also showed that the gateway region triggers a rapid transition: once a centaur has entered it, it is very likely to become a JFC within a few thousand years, a blink of an eye in solar system timeframes.
The calculations suggest that an object of SW1’s size should enter the region every 50,000 years, making it likely that SW1 is the largest centaur to begin this transition in all of recorded human history, Harris and Volk suggest. In fact, SW1 could be on its way to becoming a “super comet” within a few thousand years.
Comparable in size and activity to comet Hale-Bopp, one of the brightest comets of the 20th century, SW1 has a 70% chance of becoming what could potentially amount to the most spectacular comet humankind has ever seen, the authors suggest.
“Our descendants could be seeing a comet 10 to 100 times more active than the famous Halley comet,” Harris said, “except SW1 would be returning every six to 10 years instead of every 75.”
“If there had been a comet this bright in the last 10,000 years we would know about it,” Volk said.
“We take this as strong evidence that a similar event has not happened at least since then,” Harris said, “because ancient civilizations would not only have recorded the comet, they may have worshiped it!”
More information: G. Sarid et al., “29P/Schwassmann–Wachmann 1, A Centaur in the Gateway to the Jupiter-family Comets,” The Astrophysical Journal Letters (2019). DOI: 10.3847/2041-8213/ab3fb3
Image: A comet worthy of worship: an artist's illustration of what centaur SW1 would look like if it became an inner solar system Jupiter-family comet at a distance of 0.2 AU (19 million miles) from Earth. The Moon is in the upper right part of the frame for scale.
Credit: Heather Roper | 0.89579 | 4.080276 |
Jupiter Magnetosphere - CESAR
Jupiter Magnetosphere Study
The radiation at radio wavelengths coming from Jupiter is the thermal emission of the planet plus non-thermal emission from high-energy electrons trapped in its surrounding magnetosphere. Due to the misalignment of Jupiter's spin and magnetic axes, the non-thermal intensity varies with the rotation of the planet. Since the rotation period is about 10 hours, systematic observations will allow students to measure the periodic power variation, also known as the beaming curve.
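The beaming-curve measurement described above amounts to phase-folding power readings on Jupiter's rotation period. Below is a minimal sketch of that folding step; the period value (~9.925 hours, the System III figure) and the synthetic data are illustrative assumptions, not values from this project.

```python
import math

PERIOD_H = 9.925  # Jupiter's System III rotation period in hours (approx.)

def fold_on_period(times_h, powers, period_h=PERIOD_H, n_bins=12):
    """Average power measurements into phase bins of the rotation period."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for t, p in zip(times_h, powers):
        phase = (t % period_h) / period_h            # 0 <= phase < 1
        b = min(int(phase * n_bins), n_bins - 1)     # guard against phase == 1
        sums[b] += p
        counts[b] += 1
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

# Synthetic demo: a thermal baseline plus a rotation-modulated non-thermal term.
times = [0.5 * i for i in range(400)]                # 200 hours of half-hour samples
powers = [10.0 + 2.0 * math.sin(2 * math.pi * (t % PERIOD_H) / PERIOD_H)
          for t in times]
curve = fold_on_period(times, powers)
# The folded curve recovers the +/-2 modulation around the baseline of 10.
```

In practice, students would substitute their own timestamped power measurements for the synthetic series; the bin count is a free choice trading phase resolution against noise per bin.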
Jupiter magnetosphere. Cassini (NASA)
The project objective is to seek non-thermal variability unrelated to the Jovian magnetic field, such as variations in solar activity or possible changes induced in the planet by great asteroid and comet impacts.
Jovian radio emission variation, at 1.43 GHz, over a rotation period (T.D. Carr and S. Gulkis, 1969).
Gibbous ♋ Cancer
Moon phase on 13 November 2049, Saturday, is Waning Gibbous; the 18-day-old Moon is in ♋ Cancer.
The previous main lunar phase was the Full Moon, 3 days earlier, on 9 November 2049 at 15:38.
Moon rises in the evening and sets in the morning. It is visible to the southwest and it is high in the sky after midnight.
Moon is passing about ∠7° of ♋ Cancer tropical zodiac sector.
Lunar disc appears visually 8.2% narrower than solar disc. Moon and Sun apparent angular diameters are ∠1787" and ∠1939".
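As a cross-check, the quoted lunar angular diameter follows from simple geometry, given the Earth-Moon distance listed further down this page (401 083 km) and the Moon's mean diameter (~3 474.8 km, an assumed standard value):

```python
import math

MOON_DIAMETER_KM = 3474.8   # mean lunar diameter; assumed standard value
ARCSEC_PER_RAD = 180.0 * 3600.0 / math.pi

def angular_diameter_arcsec(diameter_km, distance_km):
    """Apparent angular diameter of a sphere seen from the given distance."""
    return 2.0 * math.atan(diameter_km / (2.0 * distance_km)) * ARCSEC_PER_RAD

# Earth-Moon distance quoted on this page for 13 November 2049
moon_arcsec = angular_diameter_arcsec(MOON_DIAMETER_KM, 401_083)
print(round(moon_arcsec))   # 1787, matching the quoted value of 1787"
```

The same function with the distance given on the second moon-phase page below (369 801 km) reproduces that page's quoted figure of 1938" as well.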
Next Full Moon is the Cold Moon of December 2049 after 25 days on 9 December 2049 at 07:28.
There is a low ocean tide on this date. The Sun's and Moon's gravitational forces are not aligned, but meet at a large angle, so their combined tidal force is weak.
The Moon is 18 days old. Earth's natural satellite is moving from the middle to the last part of the current synodic month. This is lunation 616 of the Meeus index or 1569 of the Brown series.
Length of current 616 lunation is 29 days, 13 hours and 21 minutes. It is 1 hour and 5 minutes longer than next lunation 617 length.
Length of current synodic month is 37 minutes longer than the mean length of synodic month, but it is still 6 hours and 26 minutes shorter, compared to 21st century longest.
This lunation's true anomaly is ∠278.3°. At the beginning of the next synodic month the true anomaly will be ∠310.2°. The length of upcoming synodic months will keep decreasing since the true anomaly gets closer to the value of New Moon at the point of perigee (∠0° or ∠360°).
10 days after the point of perigee on 2 November 2049 at 18:03 in ♒ Aquarius. The lunar orbit is getting wider, while the Moon is moving outward from the Earth. It will keep this direction for the next 3 days, until it reaches the point of the next apogee on 16 November 2049 at 15:07 in ♌ Leo.
Moon is 401 083 km (249 221 mi) away from Earth on this date. Moon moves farther next 3 days until apogee, when Earth-Moon distance will reach 404 490 km (251 338 mi).
2 days after its descending node on 10 November 2049 at 14:33 in ♉ Taurus, the Moon is following the southern part of its orbit for the next 11 days, until it crosses the ecliptic from South to North at the ascending node on 25 November 2049 at 00:12 in ♐ Sagittarius.
15 days after beginning of current draconic month in ♏ Scorpio, the Moon is moving from the second to the final part of it.
1 day after previous North standstill on 12 November 2049 at 00:36 in ♊ Gemini, when Moon has reached northern declination of ∠21.270°. Next 12 days the lunar orbit moves southward to face South declination of ∠-21.274° in the next southern standstill on 26 November 2049 at 06:55 in ♐ Sagittarius.
After 11 days on 25 November 2049 at 05:35 in ♐ Sagittarius, the Moon will be in New Moon geocentric conjunction with the Sun and this alignment forms next Sun-Moon-Earth syzygy. | 0.848363 | 3.02898 |
Moon* ♈ Aries
Moon phase on 29 March 2014, Saturday, is Waning Crescent; the 28-day-old Moon is in ♓ Pisces.
The previous main lunar phase was the Last Quarter, 5 days earlier, on 24 March 2014 at 01:46.
Moon rises after midnight to early morning and sets in the afternoon. It is visible in the early morning low to the east.
Moon is passing about ∠22° of ♓ Pisces tropical zodiac sector.
Lunar disc appears visually 0.9% wider than solar disc. Moon and Sun apparent angular diameters are ∠1938" and ∠1921".
Next Full Moon is the Pink Moon of April 2014 after 16 days on 15 April 2014 at 07:42.
There is a medium ocean tide on this date. The Sun's and Moon's gravitational forces are not aligned, but meet at a very acute angle, so their combined tidal force is moderate.
The Moon is 28 days old. Earth's natural satellite is moving from the second to the final part of the current synodic month. This is lunation 175 of the Meeus index or 1128 of the Brown series.
Length of current 175 lunation is 29 days, 10 hours and 45 minutes. It is 45 minutes shorter than next lunation 176 length.
Length of current synodic month is 1 hour and 59 minutes shorter than the mean length of synodic month, but it is still 4 hours and 10 minutes longer, compared to 21st century shortest.
This New Moon's true anomaly is ∠25.1°. At the beginning of the next synodic month the true anomaly will be ∠46.9°. The length of upcoming synodic months will keep increasing since the true anomaly gets closer to the value of New Moon at the point of apogee (∠180°).
1 day after the point of perigee on 27 March 2014 at 18:30 in ♒ Aquarius. The lunar orbit is getting wider, while the Moon is moving outward from the Earth. It will keep this direction for the next 10 days, until it reaches the point of the next apogee on 8 April 2014 at 14:52 in ♋ Cancer.
Moon is 369 801 km (229 784 mi) away from Earth on this date. Moon moves farther next 10 days until apogee, when Earth-Moon distance will reach 404 503 km (251 347 mi).
10 days after its ascending node on 19 March 2014 at 06:30 in ♎ Libra, the Moon is following the northern part of its orbit for the next 2 days, until it crosses the ecliptic from North to South at the descending node on 1 April 2014 at 02:30 in ♈ Aries.
10 days after beginning of current draconic month in ♎ Libra, the Moon is moving from the beginning to the first part of it.
6 days after previous South standstill on 23 March 2014 at 07:28 in ♐ Sagittarius, when Moon has reached southern declination of ∠-18.997°. Next 6 days the lunar orbit moves northward to face North declination of ∠18.957° in the next northern standstill on 5 April 2014 at 07:12 in ♊ Gemini.
After 1 day on 30 March 2014 at 18:45 in ♈ Aries, the Moon will be in New Moon geocentric conjunction with the Sun and this alignment forms next Sun-Moon-Earth syzygy. | 0.848363 | 3.141832 |
Tabby’s Star: Exomoon’s Slow Annihilation Could Explain the Dimming of the Most Mysterious Star in the Universe
For years, astronomers have looked up at the sky and speculated about the strange dimming behavior of Tabby’s Star. First identified more than a century ago, the star dips in brightness over days or weeks before recovering to its previous luminosity. At the same time, the star appears to be slowly losing its luster overall, leaving researchers scratching their heads.
Now, astronomers at Columbia University believe they’ve developed an explanation for this oddity.
In a new paper published in the Monthly Notices of the Royal Astronomical Society, astrophysicists Brian Metzger, Miguel Martinez and Nicholas Stone propose that the long-term dimming is the result of a disk of debris—torn from a melting exomoon—that is accumulating and orbiting the star, blocking its light as the material passes between the star and Earth.
“The exomoon is like a comet of ice that is evaporating and spewing off these rocks into space,” said Metzger, associate professor of astrophysics at Columbia University and principal investigator on the study. “Eventually the exomoon will completely evaporate, but it will take millions of years for the moon to be melted and consumed by the star. We’re so lucky to see this evaporation event happen.”
Tabby’s Star, also known as KIC 8462852 or Boyajian’s Star, is named after Tabetha Boyajian, the Louisiana State University (LSU) astrophysicist who discovered the star’s unusual dimming behavior in 2015. Boyajian found that Tabby’s Star occasionally dips in brightness—sometimes by just 1 percent and other times by as much as 22 percent – over days or weeks before recovering its luster. A year later, LSU astronomer Bradley Schaefer discovered that the star’s brightness is also becoming fainter overall with time, dimming by 14 percent between 1890 and 1989.
Scientists around the world have proposed a variety of theories, ranging from comet storms to alien “megastructures,” to explain the short-term dips in brightness, but very recently agreed on a much more mundane culprit—dust.
As an exoplanet is destroyed by strong interactions or collisions with its parent star, Metzger explained, the exomoon orbiting the exoplanet can become vulnerable to the pull of the system’s central star. The force can be so great that the star rips the exomoon away from its planet, causing the exomoon to either collide with a star or otherwise be ejected from the system.
In a small percentage of cases, however, the star steals the exomoon and places it into a new orbit around itself. In this new orbit, the icy, dusty exomoon is exposed to radiation from the star that rips apart its outer layers, creating dust clouds that are eventually blown out into the solar system. When those clouds of dust pass between the star and Earth, intermittent dips in brightness are observed.
This explains the short-term, inconsistent dimming of Tabby’s Star, but researchers have had a harder time explaining the long-term overall fading.
The Columbia team suggests that Tabby’s Star abducted an exomoon from a now long-gone, nearby planet and pulled it into orbit around itself, where it has been getting torn apart by stronger stellar radiation than existed in its former orbit. Chunks of the exomoon’s dusty outer layers of ice, gas, and carbonaceous rock have been able to withstand the radiation blow-out pressure that ejects smaller-grain dust clouds, and the volatile, large-grain material has inherited the exomoon’s new orbit around Tabby’s Star, where it forms a disk that persistently blocks the star’s light. The opaqueness of the disk can change slowly, as smaller-grain clouds pass through and larger particles stuck in orbit move from the disk toward Tabby’s Star, eventually getting so hot that they melt and fall onto the star’s surface.
Ultimately, after millions of years, the exomoon orbiting Tabby’s Star will completely evaporate, the researchers suggest.
Martinez, a Columbia College alumnus (CC’19) and researcher working with Metzger, said the team’s model is unique in its hypothesis of what drives the original planet toward the star in the first place. “It naturally results in the orphaned exomoons ending up on (highly eccentric) orbits with precisely the properties previous research had shown were needed to explain the dimming of Tabby’s star,” Martinez said. “No other previous model was able to put all these pieces together.”
There are other stellar systems that demonstrate unusual brightness dips, Martinez said, and there may be other explanations for the flux that are equally compelling. Tabby’s Star is unusual because it is very similar to Earth’s sun but is exhibiting drastically different behavior. It is the only star like it among the one million stars observed by Kepler, but there are many million times more stars in the universe that have yet to be observed.
The challenge now is finding other stars like Tabby’s that have abducted exomoons and have not yet finished annihilating them. If the team’s explanation is correct, Metzger said, it indicates that moons are a common feature of exoplanetary systems, thereby providing a way to probe the existence of exomoons.
“We don’t really have any evidence that moons exist outside of our solar system, but a moon being thrown off into its host star can’t be that uncommon,” he said. “This is a contribution to the broadening of our knowledge of the exotic happenings in other solar systems that we wouldn’t have known 20 or 30 years ago.”
— Jessica Guenzel | 0.918927 | 3.833031 |
A direct and initial feeling when faced with something incomprehensible or sublime.
A more reflective feeling one has when unable to put things back into a familiar conceptual framework.
Comprehension. It’s where the feelings of awe and wonder begin. It’s what it comes down to. You are either able to comprehend the distance, the numbers, the size, the effort, the workload, the accomplishment, or you can’t. The notion of whether something is incomprehensible or not is as old as, well, us.
There have been significant discoveries, such as the ‘Warren Field Calendar’ and the ‘Nebra Disk,’ that show that early cultures identified celestial objects and documented the tracking of lunar phases, and that they used that knowledge in their agricultural societies, in which the harvest depended on planting at the correct time of year and the full moon was the only lighting for night-time travel into city markets. But only since Galileo, who was the first to use a telescope to observe the sky and who, after constructing a 20x refractor, discovered the four largest moons of Jupiter in 1610, have we observed the planets in the night sky and wondered if they might be worlds like our very own. Today we know otherwise, but one thing we have in common, as it was then and as it is today, is that gazing at the heavens elicits feelings of awe and wonder. After all, we are voyagers, we are pioneers. There wasn’t a pivotal moment when the first member of the Homo erectus family decided to pack their bag, carry the fire and leave the cave to see what was over the next valley; we were already on our way and needed to use the cave for shelter against the rain. That was it. One night’s shelter from the storm, and we didn’t even unpack.
Around 140,000 years later, the very first inroad into scientific and philosophical thinking put the night sky at the center of curiosity. Thales of Miletus (624 – 546 BC), whom many, most notably Aristotle, regarded as the first philosopher in the Greek tradition, and who is recognized as the first individual in Western civilization known to have entertained and engaged in scientific philosophy, was an accomplished astronomer who successfully predicted the solar eclipse of May 28th, 585 BC. Thales also described the position of Ursa Minor, and thought the constellation might be useful as a guide for navigation at sea. He calculated the duration of the year and the timings of the equinoxes and solstices, and he is credited with the first observation of the Hyades and with calculating the position of the Pleiades.
Of course, you do not need to be a philosopher to be awestruck by astronomical beauty.
For the past few years (at the time of writing), Shaun Gallagher, a Professor who held the Lillian and Morrie Moss Chair of Excellence at the University of Memphis, has headed a research team of philosophers, psychologists, neuroscientists, engineers, and art historians in a project to study a special case of awe and wonder. He was interested in specific experiences reported by Astronauts and Cosmonauts during space travel, who described deeply aesthetic, spiritual, and sometimes religious experiences generated in almost all cases by visual stimuli—views of deep space or of earth as seen from the windows of the space shuttle or the International Space Station (ISS).
In a complex analysis of the Astronauts’ journals and interviews, Professor Gallagher found explicit descriptions of 34 different categories of experience related to these definitions. They included:
“…for example, experiences of being captured by or drawn to the view of the earth from the ISS; a feeling of connectedness with what they were seeing; a feeling of contentment (tranquility); a dream-like feeling (a feeling of unreality); a feeling of elation; a feeling of being overwhelmed; an experience of a perspective shift (a change of moral attitude); an experience of scale effects (feelings of the vastness of the universe or one’s own smallness or insignificance); and so on. It was important to have good descriptions and categories of the astronauts’ experiences for our attempt to replicate them.”
Professor Gallagher then went on to say;
“We were interested in answering a variety of questions. What are experiences of awe and wonder during space travel really like? What is the actual phenomenology? What aspects of the environment motivate such experiences?
First, you suddenly get the feeling that, hey, this is just one small planet which is lost in the middle of space… A very important feeling about the fact that we’re just drifting through an immense universe. . . You become a little more conscious about the fact that we shouldn’t be doing silly things on Earth like fighting and killing each other.
So, it’s a different view on Earth… I think it just really makes you feel less important when you look at everything in such a view like that. You’re just a speck on the Earth that’s in a universe of many different planets. You’re small compared to everything else, and I didn’t feel too bad, but it kinda makes me feel like my problems now are not really as big as I think they are compared to everything else in the world.
Second, views of earth in near-earth orbit elicited higher responses of awe and wonder than did views of deep space.
Third, we were able to track definite changes in EEG data (i.e., greater alpha suppression in both the frontal and the occipital/parietal areas, in both left and right hemispheres) correlated to experiences of awe and wonder.
Finally, and perhaps surprisingly, those subjects who indicated higher measures of religiosity (specifically those who expressed a more intense connection with a higher power and those who engage more in religious practices) experienced less awe and wonder than those who indicated lower measures on this scale. One possible way to explain this result is that those with higher religiosity scores may better be able to incorporate the space-related experiences into their expectations or conceptual schemas, thereby undermining conditions for experiencing awe and wonder.”
Reading Professor Gallagher’s “A neurophenomenology of awe and wonder: Towards a non-reductionist cognitive science,” I got to thinking: Why do we feel awe? What sparked MY feelings of awe and wonder, my passion for space, specifically space exploration and astronomy? After all, as one does not need to be a philosopher to be awestruck by astronomical beauty, it’s becoming increasingly apparent that one does not need to be an Astronaut either. So where did it come from?
I was 10 years of age when I first holidayed in Florida, travelling from London, UK, and visited the John F. Kennedy Space Center. It left an indelible mark. I felt at home (and still do when I visit as often as I can). Within days of my return I had penned a letter to NASA, firstly asking ‘How do I become an Astronaut?’, and secondly simply thanking them for everything they had ever done, ever. They replied with a pack of information that went far beyond anything I had ever expected to receive (and in an untearable ‘Tyvek’ envelope, which I remember greatly impressed me also). The pack contained information for study seminars and groups local to me and internationally, career advice, history books, material samples from space suits, NASA center pamphlets, facility badges and stickers, as well as posters and booklets from each. I was hooked.
I remember being in awe of NASA’s generosity, thinking that this amount of information couldn’t possibly be sent to every child that had written to them asking how to become an Astronaut, but then, perhaps it was? This was only five years after the Space Shuttle Challenger, on mission STS-51-L, had exploded 73 seconds into its flight, killing all 7 of its Astronauts onboard. The mission of Challenger was the spread of ‘Science, Technology, Engineering and Mathematics’ (STEM) education, and that’s really the legacy of Challenger that continues today – that mission, that focus on outreach and education.
Having found my passion I needed a tangible connection, some accessibility to NASA, something that would trigger my awe and wonder in space exploration each and every time I saw it. Something to fuel my inspiration, to reconnect me to how I felt when I was there standing on that hallowed ground in Florida. And then I found it, my space oddity.
The Command Module of Apollo 10 basks under spotlights on the ground floor of London’s Science Museum. It was first loaned to the Science Museum in 1976 from the Smithsonian’s National Air and Space Museum, and has remained on an extended loan ever since, and it was a little under 9 miles from my front door.
Image credit: NASA History Office
The Apollo 10 spacecraft was launched from Cape Kennedy at 12:49 p.m. EDT on May 18, 1969. This liftoff marked the fourth manned Apollo launch in the short space of seven months. After the spacecraft completed one and a half revolutions of the Earth, the S-IVB booster stage was reignited to increase the speed of the spacecraft to the velocity required to escape the gravitational attraction of the Earth. Three days later, the spacecraft was placed in a 60-by-170-nautical-mile orbit around the Moon. After the spacecraft completed two revolutions of the Moon, the orbit was circularized to 60 nautical miles by a second burn of the service propulsion system.
Image credit: NASA History Office
The Apollo 10 mission encompassed all aspects of an actual crewed lunar landing, except the landing itself. It was a complete staging of the Apollo 11 mission: Astronauts Thomas Stafford and Eugene Cernan descended inside the Lunar Module (LM) to within 14 km of the lunar surface, achieving the closest approach to the Moon before Apollo 11 landed two months later. It was the first flight of a complete, crewed Apollo spacecraft to operate around the Moon. Objectives included a scheduled eight-hour lunar orbit of the separated LM, with a descent to an altitude of less than 9 miles (47,000 feet / 14,326 meters) above the Moon. At this altitude, two passes were made over the future Apollo 11 landing site before the LM ascended for rendezvous and docking with Astronaut John W. Young in the Command and Service Module, or CSM, in a 70-mile circular lunar orbit. Pertinent data gathered in this landing rehearsal dealt with the lunar potential, or gravitational effect, to refine the Earth-based crewed spaceflight network tracking techniques, and to check out LM programmed trajectories, radar, and lunar flight control systems. Twelve television transmissions to Earth were planned, and all mission objectives were achieved.
On May 24, the service propulsion system was reignited, and the astronauts began the return journey to Earth. Splashdown occurred at 12:52 p.m. on May 26, 1969, less than 4 miles (6.4 km) from the target point and the recovery ship. From the Moon to South Kensington.
While Apollo 10’s Command Module would seem to have little connection to London, other than its long-term residence, the relationship runs deeper. The Saturn V rockets were developed directly from Nazi V2 rocket technology after the Americans took many surplus V2s back with them across the Atlantic at the end of the war, along with the rocket programme’s technical director, Wernher von Braun. These rockets rained down on London between the 8th September 1944 and the 27th March 1945, only two and a half decades before Apollo 10 flew to the Moon. (One of these V-weapons stands just metres away from the Apollo 10 capsule itself in the Science Museum, aptly titled “V2: The rocket that launched the space age.”)
image credit: apollocomic
The Apollo 10 Command Module had been displayed across Europe before coming to London. However, touring an artifact that weighs nearly seven tons and was described as being in a state of ‘considerable disrepair’ quickly became a logistical nightmare, demanding much funding and planning. Once the Command Module came to London, moving it anywhere else seemed too costly and difficult. So the Command Module settled in.
Today, I look upon the Apollo 10 Command Module as though I were seeing it either for the first or the last time. The overwhelming feelings of awe and wonder first elicited by my visit to the John F. Kennedy Space Center in Florida return, and since 1991 I have found them on my own doorstep. There is an overwhelming feeling of reverence, admiration, and even fear for the risks that were involved in the pursuit of the Moon (after all, the Saturn V had 6 million components; even if NASA achieved its target 99.9% success rate, that would mean 6,000 components failed on a good launch). It is a feeling produced by that which is grand, yet sublime, and, without doubt, extremely powerful.
I visit the London Science Museum at almost every opportunity, and I’ve visited the John F. Kennedy Space Center at least once every two years for as long as I can remember; and I am yet to lose the feeling of wonder, and of delight, that this notion of awe should reside in all the things around me when I walk through the entrance gates.
With that rationale, if you need me, you know where to find me.
title image credit: Nasa History Office
What holds small asteroids together? Surely not gravity, they’re too small for that. Today, Daniel Scheeres and buddies at the University of Colorado enlighten us with a study of the forces at work in these small bodies.
In 2005, the Japanese Hayabusa mission circled and landed on the potato-shaped asteroid Itokawa, which measures just a few hundred metres in size. (It is due to return to Earth later this year with a sample of asteroid dust.)
Spin rate statistics suggest that Itokawa and asteroids like it are piles of rubble held together by gravity on scales of 150 metres and larger. But at these spin rates, smaller boulders should fly off into space.
But that creates a puzzle. Images from Hayabusa show that on smaller scales, Itokawa is little more than a collection of boulders and dust. But if gravity cannot supply the centripetal force the spin demands, what’s holding Itokawa together?
Astronomers have known for some time that the forces involved do not need to be large: various simulations have shown that even small cohesive forces can make spinning piles of rubble stable in low gravity environments.
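To see why gravity alone falls short, it helps to compare a small body’s surface gravity against the centripetal acceleration its spin demands. The sketch below uses illustrative numbers (a 100-metre homogeneous body of density 2,000 kg/m³ spinning once per hour); these are assumptions for illustration, not figures from the paper.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(radius_m, density_kg_m3):
    """Surface gravitational acceleration of a homogeneous sphere."""
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m**3
    return G * mass / radius_m**2

def spin_acceleration(radius_m, period_s):
    """Centripetal acceleration required at the equator for a given spin period."""
    omega = 2.0 * math.pi / period_s
    return omega**2 * radius_m

# Illustrative 100 m rubble pile, density 2000 kg/m^3, spinning once per hour
g = surface_gravity(100.0, 2000.0)
a = spin_acceleration(100.0, 3600.0)
print(g, a, a > g)  # the spin term exceeds gravity by a wide margin
```

At a one-hour spin period even this modest body would shed loose surface material, which is why some additional cohesive force, however weak, is needed to hold it together.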
Of the various possibilities, the main ones that astronomers have studied are radiation pressure from the Sun, friction and electrostatic forces between ionised dust (which is responsible for dust levitation on the Moon and so more likely to push dust apart).
The goal of the latest work by Scheeres and company is to “perform a survey of the known relevant forces that act on grains and particles, state their analytical form and relevant constants for the space environment, and consider how these forces scale relative to each other.”
Scheeres and co show that none of the usual suspects is the likely culprit. Instead it looks as if small asteroids are held together by van der Waals forces.
That has two interesting implications. First, for asteroid evolution: Scheeres and co suggest that spinning asteroids gradually throw off larger boulders until they end up as rubble piles held together by van der Waals forces. That may help to explain the size distribution of asteroids.
Second, this process may also explain, at least in part, the formation of planetary rings such as those around Saturn which are made up exclusively of small bodies.
If Scheeres and co are right, their conclusions will lead to a significant reassessment of the surface properties of asteroids, not to mention of the structure and evolution of planetary rings. No small feat.
Ref: arxiv.org/abs/1002.2478: Scaling Forces To Asteroid Surfaces: The Role Of Cohesion
Yesterday I talked about apparent sizes, and how Pluto can appear smaller than a distant galaxy, even though the galaxy is much farther away. It turns out, however, that on really cosmic scales apparent size is only part of the story. That’s because the universe is expanding.
In astronomy we generally don’t talk about the distance of far galaxies. Instead we refer to their redshift, often known as z. The reason for this is that the redshift of a galaxy is pretty unambiguous. You measure the spectrum of a galaxy, then compare a known emission or absorption line with what we observe here on Earth. From the difference between their two wavelengths, we can calculate z.
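A minimal sketch of that calculation (the rest wavelength of hydrogen-alpha, 656.28 nm, is real; the observed wavelength below is invented for illustration):

```python
def redshift(observed_nm, rest_nm):
    """z = (lambda_observed - lambda_rest) / lambda_rest."""
    return (observed_nm - rest_nm) / rest_nm

# Hydrogen-alpha has a rest wavelength of 656.28 nm; suppose a galaxy's
# spectrum shows the same line shifted to 2625.12 nm (an invented value)
z = redshift(2625.12, 656.28)
print(z)  # ~3.0
```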
Now it is basically true that the bigger the redshift a galaxy has, the greater its distance. But because the universe is expanding, there are different distances that can be defined. You can, for example, define distance by the time it has taken light to travel from the galaxy to us. We can define distance as how far away the galaxy was when its light began its journey. We can define distance as how far away it is now. So for example, a galaxy with a redshift of z = 3 was about 5.2 billion light years away when the light we observe left the galaxy. The light travelled for about 11.5 billion years, and the galaxy is now about 21 billion light years away. That can be a bit difficult to wrap your head around, which is why we typically just stick with z.
Strange as all this is, it has a very real effect on what we observe in the distant universe. When we talk about apparent size, we usually refer to an object’s angular diameter. This depends upon its actual diameter and its distance from us. In a static universe everything would be simple, and the more distant an object is, the smaller its angular diameter would be.
But cosmic expansion changes all that. Since the universe is expanding, distant objects will appear to increase in size. The object isn’t getting larger, but as the universe expands the light traveling from a distant galaxy appears to spread out a bit. For closer galaxies this isn’t significant, but for distant galaxies it is.
For close galaxies, the greater the distance, the smaller their apparent diameter, but around z = 1.5 cosmic expansion becomes a bigger factor than the galaxy’s distance. As a result, galaxies with higher redshifts actually start appearing larger. The most distant galaxies can appear significantly larger than closer galaxies. This doesn’t mean that distant galaxies are actually larger, simply that they appear larger due to cosmic expansion.
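This turnaround can be checked numerically. The sketch below integrates a flat ΛCDM model; the parameter values (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7) are conventional assumptions, not figures from this article.

```python
import math

H0 = 70.0          # Hubble constant, km/s/Mpc (assumed)
OM, OL = 0.3, 0.7  # matter and dark-energy densities, flat LambdaCDM (assumed)
C = 299792.458     # speed of light, km/s

def comoving_distance(z, steps=2000):
    """Trapezoidal integration of c dz' / H(z'), in Mpc."""
    h = z / steps
    total = 0.0
    for i in range(steps + 1):
        zp = i * h
        f = 1.0 / math.sqrt(OM * (1 + zp) ** 3 + OL)
        total += (0.5 if i in (0, steps) else 1.0) * f
    return (C / H0) * total * h

def angular_diameter_distance(z):
    """D_A = D_C / (1 + z): the distance that sets apparent angular size."""
    return comoving_distance(z) / (1.0 + z)

# D_A rises with redshift, then falls again for very distant objects
print(angular_diameter_distance(0.5) < angular_diameter_distance(1.5))
print(angular_diameter_distance(1.5) > angular_diameter_distance(5.0))
```

With these parameters the angular diameter distance peaks near z ≈ 1.6, close to the turnaround described above, and then declines, which is why the most distant galaxies subtend larger angles.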
What’s particularly interesting about all this is that we can use this effect to determine the way in which the universe is expanding. This is part of the reason we know the universe contains dark matter and dark energy. But that’s a story for another day.
Lepus constellation lies in the southern sky, just under the feet of Orion. The constellation’s name means “the hare” in Latin.
Lepus is not associated with any particular myth, but is sometimes depicted as a hare being chased by the mythical hunter Orion or by his hunting dogs, represented by the constellations Canis Major and Canis Minor. Lepus was first catalogued by the Greek astronomer Ptolemy in the 2nd century.
The constellation is home to the famous variable star R Leporis, better known as Hind’s Crimson Star, and it contains several notable deep sky objects: Messier 79 (NGC 1904), the irregular galaxy NGC 1821, and the Spirograph Nebula (IC 418).
FACTS, LOCATION & MAP
Lepus is the 51st constellation in size, occupying an area of 290 square degrees. It is located in the first quadrant of the southern hemisphere (SQ1) and can be seen at latitudes between +63° and -90°. The neighboring constellations are Caelum, Canis Major, Columba, Eridanus, Monoceros and Orion.
Lepus contains a Messier object – Messier 79 (M79, NGC 1904) – and has one star with known planets. The brightest star in the constellation is Arneb, Alpha Leporis, with an apparent magnitude of 2.58. There are no meteor showers associated with Lepus.
Lepus is usually depicted as a hare being hunted by Orion or by his hunting dogs. The constellation is located under Orion’s feet. It is not associated with any particular myth. Sometimes it is also represented as a rabbit, also chased by Orion and his dogs.
Alpha Leporis, the brightest star in the constellation, has the name Arneb, which means “the hare” in Arabic. The hare’s ears are delineated by the stars Kappa, Iota, Lambda and Nu Leporis.
MAJOR STARS IN LEPUS
Arneb – α Leporis (Alpha Leporis)
Alpha Leporis, the brightest star in Lepus, is a lower luminosity yellow-white supergiant star with an apparent magnitude of 2.589. It is approximately 2,200 light years distant from the solar system. It has the stellar classification F0 Ib.
The star’s proper name, Arneb, comes from the Arabic arnab, which means “the hare.”
Arneb has a mass about 14 times that of the Sun, 129 times the solar radius, and it is 32,000 times more luminous. It is believed to be about 13 million years old.
Alpha Leporis is a very old, dying star which is either still expanding or has passed through the supergiant stage and is in the process of contracting and heating up. It is expected to end its life in a supernova explosion.
Nihal – β Leporis (Beta Leporis)
Beta Leporis has the stellar classification G5 II. It is a yellow bright giant with an apparent magnitude of 2.84, approximately 160 light years distant from the Sun. Its traditional name, Nihal, means “quenching their thirst.”
The star has 3.5 solar masses and 16 times the solar radius. It is believed to be about 240 million years old.
Beta Leporis is a double star system and possibly a binary star. It is composed of two stars separated by 2.58 arc seconds. The companion star is a suspected variable.
ε Leporis (Epsilon Leporis)
Epsilon Leporis is an orange giant star with the stellar classification K4 III. It has an apparent magnitude of 3.166 and is approximately 213 light years distant.
The star has 40 times the Sun’s radius and 1.70 times the mass. It is believed to be about 1.72 billion years old. It is 372 times more luminous than the Sun.
μ Leporis (Mu Leporis)
Mu Leporis is a blue-white subgiant star with the stellar classification of B9 IV:HgMn. It has an apparent magnitude of 3.259 and is approximately 186 light years distant from the solar system.
The star has 3.4 times the Sun’s radius. It is a suspected variable star of the Alpha-2 Canum Venaticorum type, with a period of about two days. The star’s spectrum shows overabundances of manganese and mercury.
An X-ray source has been detected at an angular separation of 0.93 arc seconds from the star. This might be a star that is not yet on the main sequence, or a small low-temperature star.
ζ Leporis (Zeta Leporis)
Zeta Leporis has the stellar classification of A2 IV-V(n). The (n) indicates that the absorption lines in the star’s spectrum look nebulous because the star is a rapid spinner, which causes the absorption lines to broaden as a result of the Doppler shift. The star has a rotational velocity of 245 km/s.
Zeta Leporis is a white main sequence star which is evolving into a subgiant. The star has an apparent magnitude of 3.524 and is approximately 70.5 light years distant from the solar system. A massive asteroid belt was confirmed in the star’s orbit in 2001. This was the first extra-solar asteroid belt ever discovered.
The star has 1.46 times the Sun’s mass and 1.5 times the solar radius. It is 14 times more luminous than the Sun. It is believed to be about 231 million years old.
γ Leporis (Gamma Leporis)
Gamma Leporis is a yellow-white main sequence star belonging to the stellar class F6V. It has an apparent magnitude of 3.59 and is 29.3 light years distant. It is a member of the Ursa Major Moving Group.
Gamma Leporis is slightly larger than the Sun, with 1.2 times the Sun’s radius and 1.3 times the solar mass. It is a high-priority target for the Terrestrial Planet Finder mission.
17 Leporis (SS Leporis)
17 Leporis is a spectroscopic binary with a combined visual magnitude that varies between 4.82 and 5.06.
The components belong to the spectral classes A1 and M3-4.5 and have a period of 260.34 days.
17 Leporis is approximately 1,100 light years distant from the solar system.
η Leporis (Eta Leporis)
Eta Leporis has the stellar classification of F2V. It is a yellow-white dwarf with an apparent magnitude of 3.719, about 49.1 light years distant from the Sun. Excess infrared emission has been detected coming from the star, indicating that it has a dust disk.
Eta Leporis has 1.5 times the Sun’s radius and 1.42 times the mass.
δ Leporis (Delta Leporis)
Delta Leporis is an orange subgiant star with the stellar classification K1IVFe-0.5. It has an apparent magnitude of 3.81 and is approximately 114 light years distant from the solar system.
RX Leporis is a semi-regular pulsating star with the stellar classification M6.2III. It is a red giant with an apparent magnitude that varies between 5 and 7.4.
Hind’s Crimson Star – R Leporis
R Leporis is a carbon star with the stellar classification of C7,6e(N6e). It is a well-known variable, showing variations in magnitude that range from 5.5 to 11.7.
It is classified as a long-period Mira variable. It has a period of 418-441 days, and a secondary period of about 40 years.
The star was discovered by the British astronomer J. R. Hind in 1845, and named Hind’s Crimson Star after him. He described the star as appearing “like a drop of blood on a black field.”
R Leporis is a distinctly red star located near the border with Eridanus constellation. It appears reddest when it is dimmest, and during these periods, which occur every 14.5 months, it may be the reddest star visible in the sky. The intense redness may be the result of carbon in the star’s outer atmosphere absorbing the blue part of its visible light spectrum.
Hind’s Crimson Star is approximately 1,300 light years distant. It has a radius about 500 times that of the Sun, and is between 5,200 and 7,000 times more luminous.
Gliese 229 is a red dwarf belonging to the spectral class M1Ve, only 18.8 light years distant from the Sun. The star has 69 percent of the Sun’s radius and 58 percent of its mass. It is a slow rotator, with a projected rotational velocity of 1 km/s at the equator.
Gliese 229 is a low activity flare star, with magnetic activity on its surface causing random increases in brightness. The star’s corona is a source of X-ray emission.
A substellar companion, a brown dwarf of the spectral type T7, was discovered orbiting the star in 1994 and confirmed in 1995. It was the first confirmed substellar-mass object, with a mass of 20 to 50 times that of Jupiter.
T Leporis is another Mira variable in Lepus constellation. It is a red giant star belonging to the stellar class M6II. It has an apparent magnitude of 9.94 and pulsates with a period of 380 days. With each pulsation, it loses approximately the mass of the Earth. The star is about 500 light years distant and has a diameter about 100 times that of the Sun.
Lepus contains an asterism known as the Throne of Jawza. Sometimes, it is also called the Camels, from the Arabic phrase meaning “camels quenching their thirst.” The stars forming the quadrilateral asterism are α, β, γ and δ Leporis.
DEEP SKY OBJECTS IN LEPUS
Messier 79 (M79, NGC 1904)
Messier 79 is a globular cluster in Lepus. It has an apparent magnitude of 8.56 and is approximately 41,000 light years distant from Earth.
The cluster was discovered by the French astronomer Pierre Méchain in 1780 and subsequently included in Charles Messier’s catalogue.
Like Messier 54 in Sagittarius constellation, the other extragalactic globular cluster in Messier’s catalogue, M79 is believed to have originated outside the Milky Way, in the Canis Major Dwarf Galaxy. The Canis Major Dwarf, located in Canis Major constellation, is currently interacting with the Milky Way and is unlikely to remain intact after the encounter.
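As an aside on the numbers quoted for objects like M79, an apparent magnitude and a distance combine into an absolute magnitude through the distance modulus, M = m − 5·log10(d / 10 pc). A sketch using the M79 figures above (extinction is ignored, so the result is only approximate):

```python
import math

LY_PER_PARSEC = 3.2616  # light years per parsec

def absolute_magnitude(apparent_mag, distance_ly):
    """Distance modulus: M = m - 5 * log10(d_pc / 10 pc); ignores extinction."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5.0 * math.log10(d_pc / 10.0)

# M79: apparent magnitude 8.56 at roughly 41,000 light years
print(round(absolute_magnitude(8.56, 41000.0), 2))  # about -6.9
```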
Spirograph Nebula – IC 418
IC 418 is a planetary nebula in Lepus. It was named the Spirograph Nebula because it has an intricate pattern, similar to those that can be created with a spirograph.
The nebula has an apparent magnitude of 9.6 and is approximately 1,100 light years distant from the solar system.
NGC 1821 is a type IB(s)m irregular galaxy in Lepus. It has an apparent magnitude of 14.5. The galaxy was discovered by the American astronomer Frank Leavenworth in 1886.
A supernova, SN 2002bj, was observed in the galaxy in 2002. At first it had an apparent magnitude of 14.7 and was classified as a Type IIn supernova, but in 2008, it was determined that the spectrum resembled that of a Type Ia supernova more closely.
The progenitor star system is believed to have been composed of two white dwarfs, with helium being transferred from one to the other. Once the helium accreted, it exploded in a thermonuclear reaction on the surface of the more massive star, which resulted in the outburst.
A quarter-century for the Hubble Space Telescope
Just 25 years ago, scientists worldwide were celebrating the successful launch of the Hubble Space Telescope. We soon learned, though, that its precisely-figured 2.4-metre mirror had been built to the wrong shape, and we had to wait another three years before corrective optics could be installed to correct its blurred vision. Since then, Hubble has been returning research and a gallery of stunning images that have transformed our understanding of the Universe.
Its findings impact on every area of astronomy, and every distance-scale, from the farthest and earliest galaxies to the processes of star formation and images of objects in our solar system in unprecedented detail. It has also been a key player in the discovery that the entire Universe is expanding at an increasing rate because of a mysterious entity dubbed dark energy.
It is now six years since a shuttle visited to service it for the final time, and its instruments will eventually fail. Its orbit is also decaying because of the tiny atmospheric drag at its current altitude of 545 km, and it may spiral to destruction within another decade or so.
However, we expect that Hubble will still be alive when its successor, the James Webb Space Telescope, the JWST, is launched, hopefully in 2018. With a segmented 6.5-metre mirror, and working between visible and infrared wavelengths, this should build on Hubble’s legacy. The UK Astronomy Technology Centre at Edinburgh’s Royal Observatory has leading roles in the consortium from Europe and NASA that has built one of the JWST’s three main instruments, the Mid-InfraRed Instrument or MIRI.
As the Sun climbs another 7° higher at noon during May, Edinburgh’s days lengthen by almost two hours, although we lose much more than this of nighttime darkness. On the 1st, the Sun is more than 12° below Edinburgh’s horizon, and the sky effectively dark, for a little more than five hours, but by the month’s end this shrinks to only 32 minutes. More accurately, the sky would be dark for these periods were it not for the moonlight at the start and end of the month.
Sunrise/sunset times for Edinburgh vary from 05:30/20:51 BST on the 1st to 04:37/21:45 on the 31st while the Moon is full on the 4th, at last quarter on the 11th, new on the 18th and at first quarter on the 25th.
The conspicuous star Arcturus in Bootes is climbing in the east at nightfall to dominate the high southern sky by our map times although it pales by comparison with the planets Jupiter and Venus which lie further to the west.
Below and right of Arcturus is Virgo and the closest giant cluster of galaxies, the Virgo Cluster. Located some 54 million light years away, and one of Hubble’s earliest targets, it contains up to 2,000 galaxies, more than a dozen of which are visible through small telescopes under a dark sky. Its centre lies roughly midway between the stars Vindemiatrix in Virgo, and Leo’s tail-star Denebola (see map).
Another planet, Saturn, shines at magnitude 0.0 and almost rivals Arcturus in brightness when it reaches opposition at a distance of 1,341 million km on the 23rd. It is then best placed on the meridian in the middle of the night, though it stands only 15° above Edinburgh’s horizon so that telescopic views of its rings and globe, 42 and 18 arcseconds wide respectively, may be hindered by turbulence in our atmosphere.
Currently 1.2° north of the double star Graffias in Scorpius, Saturn creeps westwards into Libra by the day of opposition. The rings have their northern face tilted 24° towards us at present and although this will increase to 26° next year, Saturn itself slides another 2° further south. Catch Saturn to the right of the Moon on the 5th-6th.
This is the best time this year to glimpse Mercury in our evening sky. Until the 11th, it stands 10° or more above the west-north-western horizon forty minutes after sunset before it sinks to set more than two hours later. It dims from magnitude -0.3 on the 1st to 1.0 on the 11th and may be followed through binoculars for just a few more days as it sinks lower and fades to magnitude 1.7 by the 15th. Mercury stands furthest from the Sun (21°) on the 7th and passes around the Sun’s near side at inferior conjunction on the 30th.
The brilliant evening star Venus improves from magnitude -4.1 to -4.3 and is unmistakable in the west at sunset, sinking to set in the north-west after 01:00. From between the Horns of Taurus at present, it tracks eastwards into Gemini to stand 1.7° above-right of the star cluster M35 on the 9th (use binoculars) and end the month 4° to the south of Pollux in Gemini. Venus approaches from 148 million to 113 million km during the period as its gibbous disk swells from 17 to 22 arcseconds and its sunlit portion falls from 67% to 53%.
Jupiter still outshines every star, but is fainter than Venus and stands above and well to its left, their separation in the sky plummeting from 50° on the 1st to 21° on the 31st. Look for Jupiter in the south-west at nightfall at present and much lower in the west by our map times. This month it fades a little from magnitude -2.1 to -1.9 and tracks 3° eastwards to the east of the Praesepe star cluster in Cancer (use binoculars). The planet lies above the crescent Moon and 833 million km away on the 23rd when a telescope shows its cloud-banded disk to be 35 arc seconds across.
14. COMETS AND METEORS
Ever since Halley discovered that the comet of 1682 was a member of the solar system, these wonderful objects have had a new interest for astronomers; and a comparison of orbits has often identified the return of a comet, and led to the detection of an elliptic orbit where the difference from a parabola was imperceptible in the small portion of the orbit visible to us. A remarkable case in point was the comet of 1556, of whose identity with the comet of 1264 there could be little doubt. Hind wanted to compute the orbit more exactly than Halley had done. He knew that observations had been made, but they were lost. Having expressed his desire for a search, all the observations of Fabricius and of Heller, and also a map of the comet's path among the stars, were eventually unearthed in the most unlikely manner, after being lost nearly three hundred years. Hind and others were certain that this comet would return between 1844 and 1848, but it never appeared.
When the spectroscope was first applied to finding the composition of the heavenly bodies, there was a great desire to find out what comets are made of. The first opportunity came in 1864, when Donati observed the spectrum of a comet, and saw three bright bands, thus proving that it was a gas and at least partly self-luminous. In 1868 Huggins compared the spectrum of Winnecke’s comet with that of a Geissler tube containing olefiant gas, and found exact agreement. Nearly all comets have shown the same spectrum.* A very few comets have given bright band spectra differing from the normal type. Also a certain kind of continuous spectrum, as well as reflected solar light showing Fraunhofer lines, have been seen. When Wells’s comet, in 1882, approached very close indeed to the sun, the spectrum changed to a monochromatic yellow colour, due to sodium. For a full account of the wonders of the cometary world the reader is referred to books on descriptive astronomy, or to monographs on comets.** Nor can the very uncertain speculations about the structure of a comet’s tail be given here. A new explanation has been proposed almost every time that a great discovery

* In 1874, when the writer was crossing the Pacific Ocean in H.M.S. “Scout,” Coggia’s comet unexpectedly appeared, and (while Colonel Tupman got its positions with the sextant) he tried to use the prism out of a portable direct-vision spectroscope, without success until it was put in front of the object-glass of a binocular, when, to his great joy, the three band images were clearly seen.

** Such as The World of Comets, by A. Guillemin; History of Comets, by J. R. Hind, London, 1859; Theatrum Cometicum, by S. de Lubienietz, 1667; Cométographie, by Pingré, Paris, 1783; Donati’s Comet, by Bond.
To define the path of comet 1556. After being lost for 3oo years, this drawing was recovered by the prolonged efforts of Mr. Hind and Professor Littrow in 1856.
has been made in the theory of light, heat, chemistry, or electricity. Halley’s comet remained the only one of which a prediction of the return had been confirmed, until the orbit of the small, ill-defined comet found by Pons in 1819 was computed by Encke, and found to have a period of 3⅓ years. It was predicted to return in 1822, and was recognised by him as identical with many previous comets. This comet, called after Encke, has showed in each of its returns an inexplicable reduction of mean distance, which led to the assertion of a resisting medium in space until a better explanation could be found.* Since that date fourteen comets have been found with elliptic orbits, whose aphelion distances are all about the same as Jupiter’s mean distance; and six have an aphelion distance about ten per cent. greater than Neptune’s mean distance. Other comets are similarly associated with the planets Saturn and Uranus. The physical transformations of comets are among the most wonderful of unexplained phenomena in the heavens. But, for physical astronomers, the greatest interest attaches to the reduction of radius vector of Encke’s comet, the splitting of Biela’s comet into two comets in 1846, and the somewhat similar behaviour of other comets. It must be noted, however, that comets have a sensible size, that all their parts cannot travel in exactly the same orbit under the sun’s gravitation, and that their mass is not sufficient to retain the parts together very forcibly; also that the inevitable collision of particles, or else fluid friction, is absorbing energy, and so reducing the comet’s velocity.

* The investigations by Von Asten (of St. Petersburg) seem to support, and later ones, especially those by Backlund (also of St. Petersburg), seem to discredit, the idea of a resisting medium.

In 1770 Lexell discovered a comet which, as was afterwards proved by investigations of Lexell, Burchardt, and Laplace, had in 1767 been deflected by Jupiter out of an orbit in which it was invisible from the earth into an orbit with a period of 5½ years, enabling it to be seen. In 1779 it again approached Jupiter closer than some of his satellites, and was sent off in another orbit, never to be again recognised. But our interest in cometary orbits has been added to by the discovery that, owing to the causes just cited, a comet, if it does not separate into discrete parts like Biela’s, must in time have its parts spread out so as to cover a sensible part of the orbit, and that, when the earth passes through such part of a comet’s orbit, a meteor shower is the result. A magnificent meteor shower was seen in America on November 12th–13th, 1833, when the paths of the meteors all seemed to radiate from a point in the constellation Leo. A similar
Scientists have been proposing a new “Planet X” in our solar system. Now two scientists suggest that it’s a black hole. Find out why.
I own a telescope. It’s small and unadorned, but well made. Using it, I’ve been able to see five of the planets in our solar system. The easiest one to spot is Venus because it’s the brightest object in the night sky, not counting the moon. Watching Venus progress from a crescent shape to its full disc, just like the lunar phases, is mind-blowing.
Another easy one is Jupiter, which is almost as bright due to its size. It’s always startling to follow the four largest moons of Jupiter in their graceful orbits. I think the most beautiful planet for stargazers is Saturn, not only because we can see its rings, but because of its golden colour. The trickiest one is Mercury. It’s so close to the sun that we can only catch it very close to the horizon. The sun’s light makes it hard to locate.
Discovering the outer planets took both a telescope and some specific analytical skills.
Readers may wonder why I’ve only managed to look at five of the planets. Honestly, that’s not a bad record. I’m doing as well as Galileo and everybody else up until the 18th century. These are all the planets we can spot with the naked eye to aim our telescopes. Discovering the outer planets took both a telescope and some specific analytical skills.
Uranus came first. William Herschel noticed it in 1781 without looking for it. He was using his telescope to catalogue all the objects we can’t quite see with the naked eye. Herschel noticed a faint, blurry object moving in front of the stars. He thought it might be a comet, but when he and the Astronomer Royal did the math, they found that it was a new planet.
Speaking of doing the math, that’s how three scientists independently found Neptune in 1846. In 1845, analyzing the orbit of Uranus, Urbain Le Verrier and John Couch Adams each used calculations to show that there was a planet beyond Uranus and to point to where it should be in the sky. Johann Gottfried Galle spotted it in 1846.
In the same way, by analyzing the orbits of Uranus and Neptune, Percival Lowell predicted the existence of Pluto in 1905. He never lived to see it, but Clyde Tombaugh found it in 1930. Pluto was demoted to a dwarf planet in 2006. I felt sad about this until I found out that it’s the biggest dwarf planet in our solar system. I guess it’s now a big fish in a small pond.
For the last five years or so, they’ve been suggesting that there is another planet out there.
Scientists are still doing the math. For the last five years or so, they’ve been suggesting that there is another planet out there. For now, they call it Planet 9, or P9 for short. Some call it Planet X. Last week, Jakub Scholtz from Durham University and James Unwin from the University of Illinois suggested something more radical. They believe that this potential planet is actually a small black hole.
They get this idea from two findings. The orbits of the small objects outside Neptune don’t match our calculations, even taking Pluto into account. Also, the Optical Gravitational Lensing
Experiment (OGLE) is finding that something is bending the light from distant stars. This type of bending effect happens when starlight goes past massive objects, like the proposed Planet X.
Most astronomers believe that the “gravitational lensing” is caused by a series of free-floating planets. Scholtz and Unwin suggest that they’re a series of primordial black holes. They go on to propose that one of these black holes is orbiting our sun.
If they’re right about this, the black hole would be surrounded by a ring of dark matter. If dark anti-matter collided with that dark matter, it would produce bursts of gamma rays. That’s the next step for these scientists. They plan to go through the data from the Fermi Gamma-Ray Space Telescope. They hope to see clustered, intermittent gamma-ray flashes moving slowly through space.
Nobody, including the researchers, knows if they will ever prove that Planet X is a black hole. That's not necessarily the point. Their detailed analysis will unlock new information about dark matter and gamma rays. They're sure to learn things from it, even if it doesn't prove their point.
We need to know more about black holes and especially about dark matter. Right now, dark matter makes up about 27% of our universe, while everything we can actually observe represents only 5%. To understand our role in the cosmos, we need to get a grasp on the nature of what we call dark matter for lack of a better term. | 0.905007 | 3.439034 |
Fierce galactic winds powered by an intense burst of star formation may blow gas right out of massive galaxies, shutting down their ability to make new stars.
Sifting through images and data from three telescopes, a team of astronomers found 29 objects with outflowing winds measuring up to 2,500 kilometers per second, an order of magnitude faster than most observed galactic winds.
“They’re nearly blowing themselves apart,” said Aleksandar Diamond-Stanic, a fellow at the University of California’s Southern California Center for Galaxy Evolution, who led the study. “Most galactic winds are more like fountains; the outflowing gas will fall back onto the galaxies. With the high-velocity winds we’ve observed the outflowing gas will escape the galaxy and never return.” Diamond-Stanic and colleagues published their findings in Astrophysical Journal Letters.
The galaxies they observed are a few billion light years away with outflowing winds of 500 to 2,500 kilometers per second. Initially they thought the winds might be coming from quasars, but a closer look revealed these winds emanate from entire galaxies.
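As a rough plausibility check on the "escape and never return" claim, the sketch below compares those wind speeds with the escape velocity of a massive galaxy. The galaxy mass and radius used here are illustrative assumptions, not values from the study.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.99e30          # solar mass, kg
KPC = 3.086e19           # kiloparsec in metres

def escape_velocity_km_s(mass_msun: float, radius_kpc: float) -> float:
    """Escape velocity v = sqrt(2GM/r), returned in km/s."""
    v = math.sqrt(2 * G * mass_msun * M_SUN / (radius_kpc * KPC))
    return v / 1000.0

# Assumed: 1e12 solar masses (stars plus dark matter) within 20 kpc.
v_esc = escape_velocity_km_s(1e12, 20.0)
print(f"v_esc ≈ {v_esc:.0f} km/s")   # roughly 650 km/s for these assumptions
```

Even with a generous mass estimate, the fastest observed winds exceed this escape velocity several times over, which is why the expelled gas is not expected to fall back.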
Young, bright and compact, these massive galaxies are in the midst of or just completing a period of star formation as intense as anyone has ever observed.
“These galactic-scale crazy-fast winds are probably driven by the really massive stars exploding and pushing out the gas around them,” said Alison Coil, professor in UC San Diego’s Center for Astrophysics and Space Sciences and a co-author of the paper. “There’s just such a high density of those stars it’s like all these bombs went off near each other at the same time. Each bomb evacuates the area around it, then the next can push gas out further until they’re evacuating gas on the scale of the whole galaxy.”
Galaxies with winds this fast are also quite rare, opening up the question of whether these are unusual events or part of a common phase in the evolution of massive galaxies that is seldom observed because it is so brief.
Astrophysicists still lack an explanation for how and why star making ends. Theorists who model the evolution of galaxies often invoke supermassive black holes called active galactic nuclei, which can also generate savage winds, to explain how gas needed to form stars can be depleted.
These new observations demonstrate that black holes may not be necessary to account for how these kinds of galaxies run out of gas. "The winds seem to be powered by the starburst," Diamond-Stanic said. "The central supermassive black hole is apparently just a spectator for these massive stellar fireworks."
Image: X-ray: NASA/CXC/JHU/D.Strickland; Optical: NASA/ESA/STScI/AURA/The Hubble Heritage Team; IR: NASA/JPL-Caltech/Univ. of AZ/C. Engelbrach; Chandra X-Ray Observatory | 0.883356 | 4.075902 |
Scientists searching for astronomical objects in the early universe, not long after the Big Bang, have made a record-breaking, two-for-one discovery.
Using ground-based telescopes, a team of astronomers has discovered the most distant supermassive black hole ever found. The black hole has a mass 800 million times that of our sun, which earns it the "supermassive" classification reserved for giants like this. Astronomers can't see the black hole, but they know it's there because they can see something else: a flood of light around the black hole that can outshine an entire galaxy. This is called a quasar, and this particular quasar is the most distant one ever observed.
The light from the quasar took more than 13 billion years to reach Earth, showing us a picture of itself as it was when the universe was just 5 percent of its current age. Back then, the universe was "just" 690 million years old. The hot soup of particles that burst into existence during the Big Bang was cooling rapidly and expanding outward. The first stars were starting to turn on, and the first galaxies beginning to swirl into shape. Quasars from this time are incredibly faint compared to the nearest quasars, the light from some of which takes just 600 million years to reach the Earth.
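The 5 percent figure follows directly from the quoted ages, assuming the commonly cited present age of the universe of about 13.8 billion years:

```python
AGE_NOW_YR = 13.8e9      # assumed present age of the universe, in years
AGE_THEN_YR = 690e6      # age of the universe when the quasar's light left it

fraction = AGE_THEN_YR / AGE_NOW_YR
print(f"{fraction:.0%} of the universe's current age")  # → 5%
```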
“It’s like finding the needle in a haystack,” said Eduardo Bañados, an astronomer at the Carnegie Institution for Science who led the international research team. Their double discovery is described in a study published Wednesday in Nature.
Black holes, mysterious as they are, are among the most recognizable astronomical phenomena in popular science. They’re pretty straightforward: Black holes are spots in space where the tug of gravity is so strong that not even light can escape. They gobble up gas and dust and anything that comes near, growing and growing in size. A supermassive black hole sits in the center of virtually all large galaxies, including the Milky Way. Astronomers can infer their existence by watching fast-moving stars hurtle around a seemingly empty, dark region.
Quasars, meanwhile, are a little trickier to understand, and you’d be forgiven for thinking they sound like something out of Star Trek. A quasar is, to put it simply, the product of a binge-eating black hole. A black hole consumes nearby gas and dust inside a galaxy with intense speed, and the violent feast generates a swirling disk of material around it as it feeds. The disk heats up to extreme temperatures on the order of 100,000 degrees Kelvin and glows brightly. The resulting light show is what we call a quasar, and what a light show it is.
“A quasar emits more light than an entire galaxy’s worth of stars, and it’s actually just a glowing disk of material that is the size of our solar system,” said Daniel Mortlock, an astrophysicist at Imperial College London and Stockholm University. In 2011, Mortlock and his colleagues reported their discovery of the most distant quasar found at the time.
The more material a black hole consumes, the bigger it becomes. Eventually, the black hole drains the surrounding area of material and has nothing to eat. The luminous disk around it shrinks and fades, and the quasar is extinguished. In this way, quasars—and the black holes that power them—are like volcanoes, erupting under one set of conditions and settling into dormancy under another.
Quasars were first detected in 1963 by the Dutch astronomer Maarten Schmidt with California’s Palomar Observatory. Astronomers thought these newly discovered points of light were stars because of their extreme brightness. But when they studied the spectrum of their light, they were stunned to find the “stars” were more than a billion light-years away. When light travels through space, it gets stretched thanks to the constant expansion of the universe. As it moves, it shifts toward redder, longer wavelengths. Astronomers can measure this “redshift” to figure out how long the light took to reach Earth, which indicates how far a certain object is. Schmidt and his fellow astronomers knew that for stars to appear so luminous to Earth from such great distances was impossible. They were dealing with completely new phenomena.
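The redshift bookkeeping itself is simple. The sketch below uses hydrogen's Lyman-alpha line; the observed wavelength is an illustrative example, not Schmidt's actual data.

```python
LYMAN_ALPHA_NM = 121.6   # rest wavelength of hydrogen's Lyman-alpha line

def redshift(observed_nm: float, rest_nm: float = LYMAN_ALPHA_NM) -> float:
    """z = (observed - rest) / rest; larger z means light left the source longer ago."""
    return (observed_nm - rest_nm) / rest_nm

# A Lyman-alpha line arriving at 972.8 nm implies z = 7, in the same
# territory as the record-breaking quasar described in this article.
print(redshift(972.8))
```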
“They’re not something that anyone predicted at all,” Mortlock said. “Occasionally you get astronomical objects like [stars known as] brown dwarfs, where people had predicted that they would exist and waited for astronomy to find them. No one predicted anything like quasars. It’s one of those cases where our imaginations weren’t up to what nature turned out to provide.”
To find the latest record-breaking quasar, Bañados and his colleagues used computer algorithms to search through databases of large sky surveys. They selected points of light they suspected could turn out to be quasars and observed them with the telescopes at Las Campanas Observatory in Chile. One night in March of this year, they all gathered to look at the data, one quasar candidate at a time. Quasars, astronomers have found, are easily recognizable when raw data is plotted on a chart. The spectrum of a quasar—a plot of brightness against the wavelength of light—has a very distinctive shape. Features known as emission lines appear broad, rather than sharp, thanks to the Doppler effect, which means the object emitting the light is traveling at high speeds.
“These objects are so bright that basically in 10 minutes, I can know from the raw data if it’s a quasar or not,” Bañados said. They found a quasar in their search, and when they calculated its distance from Earth, they couldn’t believe what they’d found. The next day, Bañados started drafting proposals to get observation time on powerful telescopes around the world to further study this quasar.
From the data for the quasar, astronomers can infer the size of the black hole responsible for powering it. “To get a bright quasar like this, you have to build up a supermassive black hole,” Mortlock said.
Astronomers studied the galaxy where the black hole and its quasar reside using radio telescopes in the French Alps and New Mexico. They found that the galaxy, at a mere 690 million years, had “already formed an enormous amount of dust and heavy chemical elements. This means it must already have formed a large amount of stars.” Astronomers say they’ll need to rethink some existing models for the evolution of galaxies to explain how a young galaxy could accumulate so much matter so fast. The findings about the galaxy are published in a separate study in the Astrophysical Journal Letters.
Quasars are some of the best targets for studying the early universe. Like flashlights, they illuminate a cosmic time astronomers are still struggling to understand. The newly discovered quasar comes from a period in the universe's history known as "the epoch of re-ionization," when a mysterious source of radiation ionized hydrogen and transformed the gas in the universe from an indiscernible fog into something transparent. About this time, the first objects to radiate light also formed. The exact process, as well as which phenomenon happened first, remains poorly understood.
Mortlock said he feels some sense of ownership of the quasar he discovered, which is now the second-farthest ever spotted. To feel that way about an object billions of light-years away is “completely ridiculous,” he said with a laugh. “And it’s especially ridiculous because there was no way that the object we discovered was going to be the end of this process. As we get more data and observe larger areas of the sky and look more deeply, we’re always going to find more objects like this.”
Someday, Bañados’s discovery will be relegated to second place, too. “There must be more out there, especially fainter ones,” Bañados said. “I’m still searching for them.”
Transit of Venus
"All being in readiness, a little after 2 p.m. the gate was locked and silence enjoined. Every observer went to his station."
G.L. Tupman, Honolulu Journal, RGO 59/70
This collection of material relates to the British expeditions of 1874 to observe the rare astronomical phenomenon of the transit of Venus. Occurring in pairs over a century apart, transits of Venus had previously been observed in 1639, 1761 and 1769. The 18th- and 19th-century transits were marked by the efforts of many individuals and institutions across Europe and America to carry out observations. It was often necessary to mount special expeditions in order to reach locations from which the event could be seen, at places widely separated on Earth. By making and timing near-simultaneous observations from precisely located observing stations across the globe, astronomers hoped to measure solar parallax. This was a means of establishing the distance between the Earth and the Sun (a distance now known as the Astronomical Unit) and, thus, knowledge of the real rather than relative scale of the Solar System. It was hoped and claimed that more accurate knowledge of the sizes and distances of the heavenly bodies would improve astronomical and navigational tables, such as the Nautical Almanac. These efforts of expeditionary astronomy drew on and fed into European interests in expanding trading opportunities and imperial and military influence.
Britain’s Royal Society, Royal Observatory and Royal and Merchant Navies had all been involved in the 18th-century transits. In 1761 Nevil Maskelyne voyaged to St Helena to observe the transit and, as Astronomer Royal, had led the British effort to observe the 1769 transit. The most famous expedition for this year was that led by James Cook in the Endeavour, which stopped to observe in Tahiti before heading on to explore the southern seas. The results of these and other expeditions were drawn together to provide a new figure for the distance to the Sun, although the results were less reliable and comparable than had been hoped. Astronomers and observatories were primed to try again for 1874 and 1882, this time with the added technology of photography and considerably simpler transport options.
The British expeditions of 1874 were sent to Egypt, the Sandwich Islands (Hawai‘i), Rodriguez, Christchurch (New Zealand) and Kerguelen. This collection has material that relates to the organisation of all of these expeditions but particularly to that of Station B, the Sandwich Islands. The central individual around whom the material has been selected is Captain George Lyon Tupman of the Royal Marine Artillery (1838-1922): the chief organiser of the British effort both before and after the transit itself (on 8/9 December 1874) and chief astronomer for Station B. This collection brings together a selection of material from Tupman’s papers in the archive of the Royal Observatory, Greenwich (RGO/59), from the papers of the Astronomer Royal, George Airy (RGO/6), and from the private collection of Tupman’s descendants, including two albums of caricature drawings following “The Life and Adventures of Station B”, by one of the seven observers, Lieutenant E.J.W. Noble.
Several of the records offer different viewpoints on the same events. Noble’s caricatures record people and occasions that we can also find in the official journals and correspondence. Tupman’s private journal overlaps and contrasts with the caricatures and his own official accounts. The instruments from the several expeditions, many of which survive in the collections of the National Maritime Museum, can be tracked through photographs, lists and records of their use and movement. It is, however, Noble’s caricatures that give us the most unique and lively view of the expedition. We get to know the observers, their work, their frustrations and their entertainments. They, the instruments and some locations are drawn so as to be recognisable. The observers are well introduced at the top of this image, which below shows the complexities of sea travel, particularly when your baggage consists of tonnes of specialist equipment. We share the team’s travels, triumphs and trials, and can assume that they too had shared the caricatures, as a way of bonding and dealing with the annoyances of environment, equipment, “incessant visitors” and each other. For Noble, a career soldier, they seem to have served the purpose of shoring up his temporary identity as an astronomer. Ultimately, they were saved as a souvenir for and by Tupman.
School of History
University of Kent
Tupman and George Airy published the official Account of the Observations of the Transit of Venus, 1874, December 8: made under the authority of the British Government in 1881. Michael Chauvin has written two accounts of the Hawai‘i expedition: Hōkūloa: The British 1874 Transit of Venus Expedition to Hawai'i (2004) and Astronomy in the Sandwich Islands (1993). Jessica Ratcliffe's The Transit of Venus Enterprise in Victorian Britain (2008) looks at the effort behind all of the British expeditions. | 0.81383 | 3.626801 |
The ESA (European Space Agency) and NASA mission SOHO -- short for Solar and Heliospheric Observatory -- got a visit from an old friend this week when comet 96P entered its field of view on Oct. 25, 2017. The comet entered the lower right corner of SOHO's view, and skirted up and around the right edge before leaving on Oct. 30. SOHO also spotted comet 96P in 1996, 2002, 2007 and 2012, making it the spacecraft's most frequent cometary visitor.
At the same time, comet 96P passed through a second NASA mission's view: STEREO -- short for Solar and Terrestrial Relations Observatory -- also watched the comet between Oct. 26-28, from the opposite side of Earth's orbit.
It is extremely rare for comets to be seen simultaneously from two different locations in space, and these are the most comprehensive parallel observations of comet 96P yet. Scientists are eager to use these combined observations to learn more about the comet's composition, as well as its interaction with the solar wind, the constant flow of charged particles from the Sun.
Both missions gathered polarization measurements of the comet; these are measurements of sunlight in which all the light waves become oriented the same way after passing through a medium -- in this case, particles in the tail of the comet. By pooling the polarization data together, scientists can extract details on the particles that the light passed through.
"Polarization is a strong function of the viewing geometry, and getting multiple measurements at the same time could potentially give useful information about the composition and size distribution of the tail particles," said William Thompson, STEREO chief observer at NASA's Goddard Space Flight Center in Greenbelt, Maryland.
Comet 96P -- also known as comet Machholz, for amateur astronomer Dan Machholz's 1986 discovery of the comet -- completes an orbit around the Sun every 5.24 years. It makes its closest approach to the Sun at a toasty 11 million miles -- a very close distance for a comet.
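Those two numbers pin down the orbit. A quick sketch using Kepler's third law in solar units (semi-major axis cubed equals period squared, with distance in astronomical units and time in years), based only on the figures quoted above:

```python
P_YEARS = 5.24              # orbital period quoted above
PERIHELION_MILES = 11e6     # closest approach quoted above
MILES_PER_AU = 92.96e6      # mean Earth-Sun distance (assumed value)

a_au = P_YEARS ** (2.0 / 3.0)            # semi-major axis from Kepler's third law
q_au = PERIHELION_MILES / MILES_PER_AU   # perihelion distance in AU
eccentricity = 1.0 - q_au / a_au

print(f"semi-major axis ≈ {a_au:.2f} AU, eccentricity ≈ {eccentricity:.2f}")
```

An eccentricity near 0.96 describes a highly elongated orbit, consistent with a comet that swings from roughly Jupiter's distance to well inside Mercury's orbit.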
When comet 96P appeared in SOHO's view in 2012, amateur astronomers studying the SOHO data discovered two tiny comet fragments some distance ahead of the main body, signaling the comet was actively changing. This time around they have detected a third fragment -- another breadcrumb in the trail that indicates the comet is still evolving.
Scientists find comet 96P interesting because it has an unusual composition and is the parent of a large, diverse family: a group of comets sharing a common orbit, originating from a much larger parent comet that, over millennia, broke up into smaller fragments.
Comet 96P is the parent of two separate comet groups, both of which were discovered by citizen scientists studying SOHO data, as well as a number of Earth-crossing meteor streams. By studying the comet's ongoing evolution, scientists can learn more about the nature and origins of this complex family.
Lina Tran | EurekAlert!
Time now to change focus from the COVID-19 virus, Dotard and the Dem campaign to a natural calamity in deep, deep space. We now know, thanks to a recent paper published in The Astrophysical Journal, that the biggest cosmic explosion on record has been detected. This is an event so powerful that it punched a dent the size of 15 Milky Ways in the surrounding space. The eruption is thought to have originated at a supermassive black hole in the Ophiuchus galaxy cluster, which is about 390m light years from Earth.
Lead author Simona Giacintucci, of the Naval Research Laboratory in Washington DC, described the blast as an astronomical version of the eruption of Mount St. Helens in 1980, which ripped off the top of the volcano. She explained:
“A key difference is that you could fit 15 Milky Way galaxies in a row into the crater this eruption punched into the cluster’s hot gas.”
Galaxy clusters are among the largest structures in the universe, containing thousands of individual galaxies, dark matter and hot gas.
At the heart of the Ophiuchus cluster we know there is a large galaxy that contains a supermassive black hole with a mass equivalent to 10 million suns. That would be:

10⁷ × 1.99 × 10³⁰ kg ≈ 2 × 10³⁷ kg
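Written out in code, with the solar mass taken as 1.99 × 10³⁰ kg:

```python
SOLAR_MASS_KG = 1.99e30          # mass of the Sun, kg
mass_kg = 1e7 * SOLAR_MASS_KG    # ten million solar masses
print(f"{mass_kg:.2e} kg")       # → 1.99e+37 kg
```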
Although black holes are known as 'sinkholes' that consume anything that drifts too close, they also expel prodigious amounts of material and energy. These jets occur when a disk of plasma accretes around the central black hole. An artist's conception of one such jet is shown below:
When the inward flow of material reaches a certain limit, a proportion escapes being swallowed by the black hole and is redirected into plasma jets that blast out in two opposing beams, perpendicular to the accretion disk, at close to the speed of light.
In the case of the Ophiuchus object, astrophysicists conjecture a jet would have traveled in a narrow beam for a certain distance, then hit something in space, which caused the beam to explode outwards in a burst of radio emissions. Maxim Markevitch, of NASA’s Goddard Space Flight Center in Greenbelt, Maryland, a co-author of the paper, compared the process to a stream of air travelling down a drinking straw and then turning into a bubble at the end of the straw.
The first hints of the giant explosion were spotted by NASA’s Chandra X-ray Observatory in 2016, which showed an unusual concave edge in the Ophiuchus galaxy cluster. However, at the time the possibility of this being caused by an explosion was discounted due to the huge amount of energy required to create such a large cavity.
The latest observations combined data from Chandra and ESA’s XMM-Newton space observatory and radio data from the Murchison Widefield Array (MWA) in Australia and the Giant Metrewave Radio Telescope (GMRT) in India to provide compelling new evidence for the gigantic explosion.
The observations confirm the presence of the curved edge and also reveal a huge patch of radio emissions tightly bordering the curve, which would correspond to the expected bubble. “This is the clincher that tells us an eruption of unprecedented size occurred here,” said Markevitch.
Astronomers think the observed explosion may have occurred due to a spike in supply of ambient gas to the black hole, perhaps occurring when a galaxy fell into the center of the cluster.
The amount of energy required to create the cavity in Ophiuchus is about five times greater than the previous record holder, an event in a galaxy cluster called MS 0735.6+7421, and hundreds and thousands of times greater than typical clusters.
You can view a simulation of this violent event, in the constellation Ophiuchus, here: | 0.819729 | 3.815223 |
NASA and the European Space Agency (ESA) have released a spectacular image of Messier 90 in the constellation of Virgo, a beautiful spiral galaxy located about 60 million light years from our own Milky Way.
This galaxy is particularly interesting for astronomers because it is one of the few that have been observed traveling toward the Milky Way, rather than away from it.
Scientists know it because the light that shines from the galaxy is "blueshifted."
Essentially, this means that the wavelength of this light is compressed as the galaxy moves closer to us, pushing it towards the blue end of the visible spectrum.
This blueshift contrasts with the way we see most of the galaxies in the universe. Because space is constantly expanding, the vast majority of the galaxies we see are moving away from us. As a result, their light is "redshifted": stretched to longer wavelengths, towards the red end of the spectrum.
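A minimal, non-relativistic sketch of that shift. The approach speed used here (about 235 km/s) is an illustrative figure for a blueshifted galaxy, not a measurement from this article:

```python
C_KM_S = 299_792.458     # speed of light, km/s
H_ALPHA_NM = 656.28      # rest wavelength of the H-alpha emission line

def observed_wavelength(rest_nm: float, v_km_s: float) -> float:
    """Doppler-shifted wavelength; v < 0 for approach (blueshift), v > 0 for recession."""
    return rest_nm * (1.0 + v_km_s / C_KM_S)

# An approaching galaxy compresses the line to slightly shorter wavelengths.
print(observed_wavelength(H_ALPHA_NM, -235.0))   # just under 656.28 nm
```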
Messier 90 is part of the Virgo Cluster, a vast collection of galaxies with more than 1,200 known members. It is the closest large galaxy cluster to the Milky Way.
Scientists believe that the peculiar direction of Messier 90's journey could be explained by the enormous mass of the cluster in which it resides. This mass is able to accelerate individual galaxies to high speeds, sending them on strange orbits that can carry them towards us or away from us over time.
In general, the Virgo Group is moving away from us, and many of the galaxies within it seem to travel at very high speeds in this direction. However, some of its galaxies, such as Messier 90, are moving at fast speeds in the opposite direction, so they appear to be moving toward Earth.
The image of Messier 90 was captured by Hubble's Wide Field and Planetary Camera 2, which collected a combination of infrared, ultraviolet and visible light and was operational between 1994 and 2010.
Hubble, which is jointly operated by NASA and ESA, was launched in 1990, and has since been responsible for producing some of the most dramatic images of our universe. While it is not the first space observatory to be launched, it is one of the largest and most versatile still in use.
Astronomers using the NASA/ESA Hubble Space Telescope have observed an unexpected thin disk of material surrounding a supermassive black hole at the heart of the spiral galaxy NGC 3147, located 130 million light-years away.
The presence of the black hole disk in a low-luminosity active galaxy surprised astronomers. Black holes in certain types of galaxies, such as NGC 3147, are considered to be starving, since there is not enough gravitationally captured material to feed them regularly. It is therefore puzzling that a thin disk surrounding a hungry black hole mimics the much larger disks found in extremely active galaxies.
Of particular interest, this disk of material surrounding the black hole offers a unique opportunity to test Albert Einstein's theories of relativity. The disc is so deeply embedded in the intense gravitational field of the black hole that the light of the gas disk is altered, according to these theories, giving astronomers a unique look at the dynamic processes near a black hole.
"We have never seen the effects of general and special relativity in visible light so clearly," said team member Marco Chiaberge of AURA for ESA, STScI and Johns Hopkins University.
Hubble measured that the disk material rotates around the black hole at more than 10% of the speed of light. At such extreme speeds, the gas appears to brighten as it travels toward the Earth on one side, and dims as it moves away from our planet on the other. This effect is known as relativistic beaming. The Hubble observations also show that the gas is embedded so deeply in a gravitational well that the light struggles to escape, and therefore appears stretched to redder wavelengths. The mass of the black hole is about 250 million times that of the Sun.
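The brightening and dimming can be sketched with the relativistic Doppler factor. This is a simplified geometry (gas moving directly toward or away from us); the real beaming law also depends on the disk's inclination and emission spectrum:

```python
import math

def doppler_factor(beta: float, cos_theta: float) -> float:
    """delta = 1 / (gamma * (1 - beta * cos_theta)); >1 approaching, <1 receding."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * cos_theta))

beta = 0.1                                  # 10% of the speed of light, as quoted
approaching = doppler_factor(beta, +1.0)    # side of the disk moving toward us
receding = doppler_factor(beta, -1.0)       # side moving away from us

# Observed frequencies (and brightness) are boosted on the approaching side
# and suppressed on the receding side.
print(f"approaching: {approaching:.3f}, receding: {receding:.3f}")
```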
"This is an intriguing look at a disk very close to a black hole, so close that the velocities and intensity of the gravitational force affect the way we see photons of light," explained the study's first author, Stefano Bianchi, of Università degli Studi Roma Tre in Italy.
Artist's impression of the peculiar, thin disk of material surrounding a supermassive black hole at the heart of the spiral galaxy NGC 3147, located 130 million light-years away. Credit: ESA/Hubble, M. Kornmesser
To study the swirling matter deep within this disk, the researchers used Hubble's Space Telescope Imaging Spectrograph (STIS). This diagnostic tool divides the light from an object into its many individual wavelengths to determine the speed, temperature and other characteristics of the object with very high precision. STIS was essential for observing the low-luminosity region around the black hole, as it blocks out the galaxy's brilliant light.
Astronomers initially selected this galaxy to validate accepted models of low-luminosity active galaxies: those with malnourished black holes. Those models predict that disks of material form only when ample gas is trapped by a black hole's strong gravitational pull, the disk then emitting a great deal of light and producing a brilliant beacon called a quasar.
Top-down view of an artist's impression of the peculiar, thin disk of material that surrounds a supermassive black hole in the heart of the spiral galaxy NGC 3147, located 130 million light-years away. Credit: ESA/Hubble, M. Kornmesser
"The type of disk we see is a scaled-down quasar that we did not expect to exist," Bianchi explained. "It's the same kind of disk we see in objects that are 1000 or even 100 000 times more luminous. The predictions of current models for very faint active galaxies clearly failed."
The team hopes to use Hubble to search for other very compact discs around low-luminosity black holes in similar active galaxies.
Publication: Stefano Bianchi, et al., "HST unveils a compact mildly relativistic broad-line region in the candidate true type 2 NGC 3147", MNRAS, 2019; doi: 10.1093/mnrasl/slz080
Scientists probe the limits of ice
UTAH: How small is the smallest possible particle of ice? It's not a snowflake, which measures a whopping fraction of an inch. According to new research published in Proceedings of the National Academy of Sciences, the smallest nanodroplet of water in which ice can form is only as big as 90 water molecules, a tenth the size of the smallest virus. At those small scales, according to University of Utah chemistry professor and study co-author Valeria Molinero, the transition between ice and water gets a little fuzzy.
“When you have a glass of water with ice, you do not see the water in the glass turn all ice and all liquid as a function of time,” she says. In the smallest water nanodroplets, she says, that’s exactly what happens.
Why “ice I” matters
The transition between water and ice is among the most important transformations between phases (solids, liquids and gases) on our planet, where it has unique effects on our climate while also regulating the viability of life. Understanding the conditions that lead to the formation of ice, then, is an active quest in areas that encompass environmental and earth sciences, physics, chemistry, biology and engineering.
Ice exists on Earth almost exclusively in the highly ordered hexagonal crystal structure known as “ice I.” In our atmosphere, small water clusters form and subsequently freeze, seeding larger crystals and eventually clouds. Due to competing thermodynamic effects, however, below a certain diameter these water clusters cannot form thermodynamically stable ice I. The exact size range of water clusters capable of forming stable ice I has been investigated through experiment and theory for years with most recent estimates narrowing the range from as low as 90 water molecules to as high as 400.
Supercooling: Low and slow
In the past, a major barrier to experimentally studying this limit has been cooling the supercooled liquid clusters slowly enough to allow the ice I lattice to form properly. Cooling too quickly creates clusters of amorphous ice, a less ordered phase. If the clusters are not cooled slowly and uniformly, the result is an unnatural combination of ice phases. Computer simulations of ice formation also face their own challenges in replicating nanoscale physics.
In the new study, researchers at the University of Utah, the University of California San Diego, the Universität Göttingen, and the Max Planck Institutes for Solar System Research and for Dynamics and Self-Organization in Göttingen combined recent advances in simulation and experiment to disentangle the interplay between the constraints that act on the ice-liquid transition in nanometer-sized clusters.
To overcome the cooling problem, the Göttingen team used a molecular beam that generates clusters of a desired size by initially expanding a mixture of water and argon through a roughly 60 micrometer diameter nozzle. The resulting beam is then funnelled through three distinct zones where the cooling rate is dropped in order to control the formation of the clusters, reaching a low temperature of 150 K (-123 °C or -189 °F). Computer models of water developed by the San Diego and Utah teams were used to simulate the properties of the nanodroplets.
The end of ice
Using infrared spectroscopic signatures to monitor the transition to ice I in the clusters, the researchers found promising agreement between the experimental and theoretical approaches. The results provide strong evidence that the “end of ice” occurs when clusters are around 90 water molecules. At this size, the clusters are only around 2 nanometers in diameter, or roughly one million times smaller than a typical snowflake.
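The quoted 2-nanometer figure is easy to check with a back-of-the-envelope estimate, assuming the cluster is a sphere at roughly the bulk density of ice (a simplification at this scale).

```python
import math

N_A = 6.022e23          # Avogadro's number, mol^-1
M_WATER = 18.015        # molar mass of water, g/mol
RHO_ICE = 0.92          # approximate density of ice I, g/cm^3

def cluster_diameter_nm(n_molecules):
    """Diameter of a spherical cluster of n water molecules at bulk-ice density."""
    volume_cm3 = n_molecules * M_WATER / (N_A * RHO_ICE)
    d_cm = (6 * volume_cm3 / math.pi) ** (1.0 / 3.0)
    return d_cm * 1e7   # cm -> nm

print(f"90-molecule cluster: ~{cluster_diameter_nm(90):.1f} nm across")
```

The estimate comes out just under 2 nm, consistent with the size reported in the study.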
Francesco Paesani at the University of California, San Diego explains, “This work connects in a consistent manner experimental and theoretical concepts for studying microscopic water properties of the past three decades, which now can be seen in a common perspective.”
Unexpectedly, the researchers found in both simulation and experiment that ice-liquid coexistence in clusters of 90 to 150 water molecules behaves differently from the sharp, well-defined melting transition we experience with macroscopic (large-scale) ice and water at 0 °C. The clusters instead transition over a range of temperatures and oscillate in time between the liquid and ice states, an effect of their small size that was first predicted three decades ago but lacked experimental evidence until now.
Thomas Zeuch of the Universität Göttingen notes, “Macroscopic systems have no analogous mechanism; water is either liquid or solid. This oscillating behavior seems unique to clusters in this size and temperature range.”
“There is nothing like these oscillations in our experience of phase coexistence in the macroscopic world!” Molinero adds. In a glass of water, she says, both the ice and water are stable and can coexist, regardless of the size of the ice chunks. But in a nanodroplet that contains both liquid and ice, most of the water molecules would be at the interface between ice and water—so the entire two-phase cluster becomes unstable and oscillates between a solid and a liquid.
When ice gets weird
Water clusters of the sizes and temperatures in the experiment are common in interstellar objects and in planetary atmospheres, including our own, Molinero says. They also exist in the mesosphere, an atmospheric layer above the stratosphere.
“They can also exist as pockets of water in a matrix of a material, including in cavities of proteins,” she says.
If the oscillatory transitions could be controlled, Molinero says, they could conceivably form the basis of a nano valve that allows the passage of materials when a liquid and stops the flow when a solid.
The results go beyond just ice and water. Molinero says that the small-scale phenomena should happen for any substance at the same scales. “In that sense,” she says, “our work goes beyond water and looks more generally to the coda of a phase transition, how it transforms from sharp to oscillatory and then the phases themselves disappear and the system behaves as a large molecule.”
The CORrelation-RAdial-VELocities (CORAVEL) instrument was a Cassegrain spectrometer for determining stellar radial velocities with very high time resolution and high accuracy. The instrument was mounted on the Danish 1.54-metre telescope at La Silla Observatory in 1981 and was operated by the Geneva Observatory, then shared with Danish and ESO astronomers. CORAVEL required a fairly large telescope (1.5 metres in diameter or more) with fast automatic pointing, and the Danish 1.54-metre was the only telescope at La Silla adapted to receive it.
The project for building CORAVEL started in 1971 from a collaboration between the Marseille Observatory and the Geneva Observatory and was financed by the Swiss National Foundation for Scientific Research and the French National Institute for Astrophysics and Geophysics. Two CORAVEL instruments were made, the first being mounted on the 1-metre Swiss telescope at the Haute-Provence Observatory in 1977. With the second CORAVEL mounted on the Danish 1.54-metre telescope a natural collaboration between ESO astronomers and astronomers from several institutes and observatories in Denmark, Switzerland and France was born.
As a radial velocity scanner, CORAVEL worked by comparing a stellar spectrum with a reference spectrum, providing exceptional accuracy for the period (250 m/s) and acquisition speed (measurements took less than 5 minutes). The reference was a template with about 3000 holes matching the absorption lines of Arcturus, located in the focal plane of the spectrograph. The stellar velocity was derived from how well the stellar spectrum matched the template, with the zero point of the velocity scale set by an iron spectrum produced by a hollow-cathode lamp.
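CORAVEL performed this correlation optically with a physical mask; the same idea is easy to sketch digitally. The toy spectrum and line list below are invented for illustration and are not CORAVEL's actual Arcturus template.

```python
import numpy as np

# A Doppler shift is a constant translation in ln(wavelength), so we work
# on a uniform log-wavelength grid.
C_KMS = 299_792.458
n = 4000
dlnl = 1e-5                                   # grid step in ln(wavelength)
lnl = np.log(5000.0) + dlnl * np.arange(n)    # around 5000 Angstroms

rng = np.random.default_rng(0)
line_centers = rng.uniform(lnl[0] + 0.002, lnl[-1] - 0.002, 60)

def spectrum(grid, v_kms):
    """Toy absorption spectrum, Doppler-shifted by v_kms (ln-lambda shift is v/c)."""
    shift = v_kms / C_KMS
    flux = np.ones_like(grid)
    for c in line_centers:
        flux -= 0.5 * np.exp(-0.5 * ((grid - (c + shift)) / 3e-5) ** 2)
    return flux

template = spectrum(lnl, 0.0)          # the "Arcturus mask" stand-in
observed = spectrum(lnl, 23.0)         # a star receding at 23 km/s

# Cross-correlate mean-subtracted spectra; the peak lag gives the velocity.
cc = np.correlate(observed - observed.mean(), template - template.mean(), mode="full")
lag = int(cc.argmax()) - (n - 1)
v_measured = lag * dlnl * C_KMS
print(f"measured radial velocity: {v_measured:.1f} km/s")
```

Note that the recovered velocity is quantised by the grid step (about 3 km/s here); real instruments interpolate the correlation peak to reach far better precision, which is how CORAVEL achieved 250 m/s.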
During their more than 20 years of operation the two CORAVEL spectrometers measured hundreds of thousands of radial velocities in thousands of stars, most of them part of surveys conducted in Chile and France. These surveys included for example: a systematic study of the multiplicity of solar-type stars, a comprehensive study of the chemical properties and movements of 14 000 F- and G-type stars, and studies of the motion of globular clusters and stars in open clusters.
CORAVEL in La Silla was decommissioned in January 1998 and eventually returned to Switzerland.
This table lists the global capabilities of the instrument.
CAPE CANAVERAL, Fla. (AP) — NASA’s sun-skimming spacecraft, the Parker Solar Probe, is surprising scientists with its unprecedented close views of our star.
Scientists released the first results from the mission Wednesday. They observed bursts of energetic particles never seen before on such a small scale as well as switchback-like reversals in the out-flowing solar magnetic field that seem to whip up the solar wind.
NASA’s Nicola Fox compared this unexpected switchback phenomenon to the cracking of a whip.
“They’re striking and it’s hard to not think that they’re somehow important in the whole problem,” said Stuart Bale of the University of California, Berkeley, who was part of the team.
Researchers said they also finally have evidence of a dust-free zone encircling the sun. Farther out, there’s so much dust from vaporizing comets and asteroids that one of 80 small viewfinders on one instrument was pierced by a grain earlier this year.
“I can’t say that we don’t worry about the spacecraft. I mean, the spacecraft is going through an environment that we’ve never been before,” Fox said.
Launched in 2018, Parker has come within 15 million miles (25 million kilometers) of the sun and will get increasingly closer — within 4 million miles (6 million kilometers) — over the next six years. It’s completed three of 24 orbits of the sun, dipping well into the corona, or upper atmosphere. The goal of the mission is to shed light on some of the mysteries surrounding the sun.
Parker will sweep past Venus on Dec. 26 for the second gravity-assist of the $1.5 billion mission and make its fourth close solar encounter in January.
The findings in the journal Nature were made during a relatively quiet phase of solar activity.
“We’re just starting to scratch the surface of this fascinating physics,” said Princeton University’s David McComas, the chief scientist of one of the spacecraft’s instruments.
As Parker gets even closer to its target, the sun will go through an active phase “so we can expect even more exciting results soon,” University College London’s Daniel Verscharen wrote in an accompanying editorial. Verscharen was not part of the mission.
Over the summer, Fox shared these early results with solar astrophysicist Eugene Parker, 92, professor emeritus at the University of Chicago for whom the spacecraft is named. He expressed excitement — “wow” — and was keen to be involved.
It’s the first NASA spacecraft to be named after a person still alive. Parker attended its launch last year from Cape Canaveral.
The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Department of Science Education. The AP is solely responsible for all content.
Colliding Galaxy Pair
NASA – This striking NASA Hubble Space Telescope image, which shows what looks like the profile of a celestial bird, belies the fact that close encounters between galaxies are a messy business.
Once part of a flat, spiral disk, the orbits of the galaxy’s stars have become scrambled due to gravitational tidal interactions with the other galaxy. This warps the galaxy’s orderly spiral, and interstellar gas is strewn out into giant tails like stretched taffy.
Gas and dust drawn from the heart of NGC 2936 becomes compressed during the encounter, which in turn triggers star formation. Bluish knots of young stars are visible along the distorted arms closest to the companion elliptical. The reddish dust, once within the galaxy, has been thrown out of the galaxy’s plane and into dark veins that are silhouetted against the bright starlight from what is left of the nucleus and disk.
The companion elliptical, NGC 2937, is a puffball of stars with little gas or dust present. The stars contained within the galaxy are mostly old, as evidenced by their reddish color. There are no blue stars that would be evidence of recent star formation. While the orbits of this elliptical’s stars may be altered by the encounter, it’s not apparent that the gravitational pull by its neighboring galaxy is having much of an effect.
Image Credit: NASA/ESA/Hubble Heritage Team
Beta Lyrae variables are a class of close binary stars. Their total brightness is variable because the two component stars orbit each other, and in this orbit one component periodically passes in front of the other one, thereby blocking its light. The two component stars of Beta Lyrae systems are quite heavy (several solar masses each) and extended (giants or supergiants). They are so close, that their shapes are heavily distorted by mutual gravitation forces: the stars have ellipsoidal shapes, and there are extensive mass flows from one component to the other.
These mass flows occur because one of the stars, in the course of its evolution, has become a giant or supergiant. Such extended stars easily lose mass, just because they are so large: gravitation at their surface is weak, so gas easily escapes (the so-called stellar wind). In close binary systems such as beta Lyrae systems, a second effect reinforces this mass loss: when a giant star swells, it may reach its Roche limit, that is, a mathematical surface surrounding the two components of a binary star where matter may freely flow from one component to the other.
In binary stars the heaviest star generally is the first to evolve into a giant or supergiant. Calculations show that its mass loss then will become so large that in a comparatively very short time (less than half a million years) this star, that was once the heaviest, now becomes the lighter of the two components. Part of its mass is transferred to the companion star, the rest is lost in space.
The light curves of beta Lyrae variables are quite smooth: eclipses start and end so gradually that the exact moments are impossible to tell. This is because the flow of mass between the components is so large that it envelopes the whole system in a common atmosphere. The amplitude of the brightness variations is in most cases less than one magnitude; the largest amplitude known is 2.3 magnitudes (V480 Lyrae).
The period of the brightness variations is very regular. It is determined by the revolution period of the binary, the time it takes for the two components to once orbit around each other. These periods are short, typically one or a few days. The shortest known period is 0.29 days (QY Hydrae); the longest is 198.5 days (W Crucis). In beta Lyrae systems with periods longer than 100 days one of the components generally is a supergiant.
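Kepler's third law makes "so close" concrete. The total mass of 8 solar masses below is an assumption for illustration; the article says only "several solar masses each".

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
R_SUN = 6.957e8        # solar radius, m
DAY = 86400.0          # seconds per day

def separation_m(period_days, total_mass_msun):
    """Semi-major axis of the relative orbit from Kepler's third law:
    a^3 = G * M_total * P^2 / (4 * pi^2)."""
    p = period_days * DAY
    m = total_mass_msun * M_SUN
    return (G * m * p**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

# Illustrative system: ASSUMED 8 solar masses total, 1-day period.
a = separation_m(1.0, 8.0)
print(f"separation: {a / R_SUN:.1f} solar radii")
```

A separation of roughly eight solar radii explains why giant components, themselves several solar radii across, become tidally distorted and can overflow their Roche lobes.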
Beta Lyrae systems are sometimes considered to be a subtype of the Algol variables; however, their light curves are different (the eclipses of Algol variables are much more sharply defined). On the other hand, beta Lyrae variables look a bit like W Ursae Majoris variables; however, the latter are in general yet closer binaries (so-called contact binaries), and their component stars are mostly lighter than the beta Lyrae system components (about one solar mass).
Examples of β Lyrae stars
The prototype of the β Lyrae type variable stars is β Lyrae, also called Sheliak. Its variability was discovered in 1784 by John Goodricke.
Nearly a thousand β Lyrae binaries are known: the latest edition of the General Catalogue of Variable Stars (2003) lists 835 of them (2.2% of all variable stars). Data for the ten brightest β Lyrae variables are given below. (See also the list of known variable stars.)
Retrieved from "http://en.wikipedia.org/"
It’s one of the harshest places on Earth — and travelers love it - CNBC
The Danakil Depression is so other-worldly that scientists use it to study the possibility of life on other planets.
© Pascal Boegli
It’s been called one of the most alien places on earth — a “gateway to hell” and, in the words of British explorer Wilfred Thesiger, a veritable “land of death.”
The sulfurous hot springs, acid pools, steaming fissures and salt mountains of the Danakil Depression resemble scenes from a science fiction movie. But the area is very real — and it’s one of Ethiopia’s top attractions.
One of the hottest places on earth (by average daily temperature) as well as one of the lowest (over 400 feet below sea level), the Danakil Depression entices three main types of people to the area: salt miners, scientists and travelers.
As they have done for centuries, miners travel hours — often by camel caravans — to extract salt slabs from the flat pans around Lake Afar. Salt is the region’s “white gold” and was a form of currency in Ethiopia until the 20th century.
A camel caravan, with salt mined by hand, travels across a salt plain in the Danakil Depression. Carl Court
Scientists are attracted to the conditions. In the 1960s, the area was used to study plate tectonics, but more recently astrobiological exploration is the larger scientific draw.
In the spring of 2016, researchers from the University of Bologna, Italy’s International Research School of Planetary Sciences and Ethiopia’s Mekelle University studied whether microbes can withstand Danakil’s scorchingly inhospitable environment (it turned out they can). Scientists reason that if these extremophiles, as they are known, can survive there, they might survive on Mars too.
Erta Ale’s interior lava lake is one of only eight in the world. SeppFriedhuber
Travelers are lured to the Danakil Depression for an altogether different reason. It’s a sweltering, foul-smelling, punitive place, which is exactly why people cross continents to see it. Despite its intensity, those who make the trek give it stellar reviews.
The sulfur springs of Dallol are a particular draw, with its stupefying shades of neon green and yellow that hiss forth from the rocky terrain. Ethiopia’s most active volcano, Erta Ale (which means “Smoking Mountain” in the local Afar language) is another, with its cartoon-like molten center, one of only eight lava lakes in the world.
How the Danakil Depression formed
Danakil is part of the Afar Triangle, a geological depression in the remote northeastern part of Ethiopia, where three tectonic plates are slowly diverging. The area is large — 124 miles by 31 miles — and was once part of the Red Sea. Over time, volcanic eruptions spewed enough lava to eventually seal off an inland sea, which evaporated in the arid climate.
The Danakil Depression really is one of the most incredible natural wonders in the world.
ETHIOPIAN TOUR GUIDE
The Earth Observatory at NASA Goddard Space Flight Center predicts that because the land is slowly sinking, it will one day fill with ocean water again. This could be millions of years in the future though.
What it’s like to visit
It’s blistering hot. Daily temperatures are around 94 F (34.4 C), but can reach as high as 122 F (50 C), and rainfall is scarce.
Day trips departing from the town of Wikro typically start around 4 a.m. From there, a four-wheel drive convoy embarks on a three-hour journey down a windy mountainous road across the Ethiopian portion of the Great Rift Valley.
Helicopter rides are also available for a speedier route.
“From up above, you can cover a lot of ground smoothly and for a second, it feels as if you’re truly on another planet,” said Henok Tsegaye, a guide who leads luxury Ethiopian tours for Jacada Travel.
The water of Lake Karum is visible through a hole in the Danakil Depression salt flats. Edwin Remsberg
“The Danakil Depression really is one of the most incredible natural wonders in the world. It is one of the most alien places on earth, and with Ethiopia growing in popularity as a destination, we are seeing more and more travelers,” said Tsegaye.
It’s common to see dead insects and birds around the perimeter of Danakil’s sulfur springs, which Tsegaye said is likely caused by drinking the water or inhaling too much of the carbon dioxide-rich air. It’s also the reason the springs have been dubbed “killer lakes.”
From fall foliage trips to cherry blossom season, nature’s changing colors have long been a catalyst for travels around the globe. But here it’s the variety of natural hues, set among a vast sea of nothingness, that truly amazes.
In the hottest, most acidic pools, sulfur and salt create a more neon yellow shade, while cooler copper-laced pools are more turquoise in color. Eric Lafforgue/Art in All of Us
“The mixture of yellow, orange, red, blue and green are due to the rain and sea water from the nearby coasts that seep through into the sulfuric lakes and get heated up by the magma,” said Tsegaye. “As the salt from the sea reacts with the minerals in the magma, these dazzling colors begin to emerge.”
As the heat evaporates the water, colorful crust-like deposits develop across the land, which mix mystically with the cooler turquoise lakes in the depression.
Is it safe to see the Danakil Depression?
Compared with the hydrothermal zones in Yellowstone National Park, Danakil is hotter and more acidic. Danakil’s springs are around 212 F (100 C) and have an average pH of 0.2. Compared with the average pH of lemon juice (2.4) or battery acid (1.0), it’s easy to see why dipping a finger in the bubbling pool isn’t advised.
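Because pH is a base-10 logarithm of hydrogen-ion concentration, those numbers translate into large concentration ratios.

```python
# Hydrogen-ion concentration scales as 10**(-pH), so each pH unit is a factor of 10.
def acidity_ratio(ph_a, ph_b):
    """How many times more acidic a solution at ph_a is than one at ph_b."""
    return 10 ** (ph_b - ph_a)

print(f"Danakil springs vs lemon juice:  {acidity_ratio(0.2, 2.4):.0f}x")
print(f"Danakil springs vs battery acid: {acidity_ratio(0.2, 1.0):.1f}x")
```

So Danakil's springs carry roughly 160 times the hydrogen-ion concentration of lemon juice.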
Proper footwear and a guide are essential.
“When walking on geothermal areas, you must be careful. The salt crust is unstable, delicate and fragile,” said Tsegaye. “You need to know where to go and exactly where to step.”
Henok Tsegaye (center). Courtesy of Jacada Travel
“Previously, there was tension near the border with Eritrea, so the armed guards wield machine guns for your protection only,” said Tsegaye.
When to go
The Danakil Depression’s high season runs from November to March, when temperatures — though still in the 90s F — are slightly more bearable.
Salt, copper and cobalt create the ever-changing colors of the Dallol landscape. F.Luise
Low season is from June to August, which Tsegaye described as “unsuitable.” Tours still operate, but the experience is often more arduous than pleasurable, he said.
“It is truly one of the most alien places on earth. It is one of the most remote, unlivable, hottest, lowest points on the planet,” said Tsegaye. “Sounds a bit scary, but that’s what makes it so fascinating as well.”
NASA’s Innovative Advanced Concepts program is all about making high-risk, high-reward bets on unique — and sometimes eyebrow-raising — ideas for space exploration and observation. This year’s grants total $7 million and include one of the most realistic projects yet. It might even get made!
NIAC awards are split into three tiers: Phase I, II and III. Roughly speaking, the first are given $125,000 and nine months basically to show their concept isn’t bunk. The second are given $500,000 and two years to show how it might actually work. And the third get $2 million to develop the concept into a real project.
It speaks to the, shall we say, open-mindedness of the NIAC program that until this year there have only been two total recipients of Phase III awards, the rest having fallen by the wayside as impractical or theoretically unsound. This year brings the third, a project at NASA’s Jet Propulsion Laboratory we first noted among 2018’s NIAC selections.
The “solar gravitational lens” project involves observing the way light coming from distant exoplanets is bent around our sun. The result, whose theoretical underpinnings the team has spent the last two years reinforcing, is the ability to create high-resolution images of extremely distant and dark objects. So instead of having a single pixel or two showing us a planet in a neighboring star system, we could get a million pixels — an incredibly detailed picture.
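The geometry behind the concept can be estimated from the standard general-relativistic bending angle; the constants below are textbook values, not figures from the article.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0      # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
R_SUN = 6.957e8        # solar radius, m
AU = 1.496e11          # astronomical unit, m

# Light grazing the solar limb is bent by theta = 4GM/(R c^2); grazing rays
# converge at a minimum focal distance d = R / theta = R^2 c^2 / (4 G M).
theta = 4 * G * M_SUN / (R_SUN * C**2)
d_focus = R_SUN / theta
print(f"bending angle at the limb: {math.degrees(theta) * 3600:.2f} arcsec")
print(f"minimum focal distance:    {d_focus / AU:.0f} AU")
```

The resulting minimum focal distance of roughly 550 AU is why mission concepts for the solar gravitational lens involve sending a spacecraft more than a dozen times farther out than Pluto.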
“As this mission is the only way to view a potentially habitable exoplanet in detail, we are already seeing the significant public interest and enthusiasm that could motivate the needed government and private funding,” the researchers write.
Several of the Phase II projects are similarly interesting. One proposes to mine ice-rich lunar soil in permanently dark areas using power collected from permanently bright areas only a few hundred meters up in tall “Sunflower” towers. Another is a concept vehicle for exploring vents on Saturn’s watery moon Enceladus. One we also saw in 2018 aims to offload heavy life support systems onto a sort of buddy robot that would follow astronauts around.
The Phase I projects are a little less consistent: antimatter propulsion, extreme solar sails and others that aren’t so much unrealistic as the science is yet to come on them.
The full list of NIAC awards is here — they make for very interesting reading, even those on the fringe. They’re created by big brains and vetted by experts, after all.
eso1812 — Science Release
Ancient Galaxy Megamergers
ALMA and APEX discover massive conglomerations of forming galaxies in early Universe
25 April 2018
The ALMA and APEX telescopes have peered deep into space — back to the time when the Universe was one tenth of its current age — and witnessed the beginnings of gargantuan cosmic pileups: the impending collisions of young, starburst galaxies. Astronomers thought that these events occurred around three billion years after the Big Bang, so they were surprised when the new observations revealed them happening when the Universe was only half that age! These ancient systems of galaxies are thought to be building the most massive structures in the known Universe: galaxy clusters.
Using the Atacama Large Millimeter/submillimeter Array (ALMA) and the Atacama Pathfinder Experiment (APEX), two international teams of scientists led by Tim Miller from Dalhousie University in Canada and Yale University in the US and Iván Oteo from the University of Edinburgh, United Kingdom, have uncovered startlingly dense concentrations of galaxies that are poised to merge, forming the cores of what will eventually become colossal galaxy clusters.
Peering 90% of the way across the observable Universe, the Miller team observed a galaxy protocluster named SPT2349-56. The light from this object began travelling to us when the Universe was about a tenth of its current age.
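The "tenth of its current age" figure can be checked with a quick flat Lambda-CDM calculation; SPT2349-56 lies at redshift z = 4.3 (as stated in the paper title), and the cosmological parameters below are assumed, not taken from the release.

```python
import math

H0 = 67.7                      # Hubble constant, km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.31, 0.69  # flat Lambda-CDM parameters (assumed)
KM_PER_MPC = 3.0857e19
GYR = 3.156e16                 # seconds per gigayear

def age_at_z(z, steps=100_000):
    """Cosmic age at redshift z: integrate dt = da / (a H(a)) from a=0 to a=1/(1+z)."""
    a_max = 1.0 / (1.0 + z)
    h0 = H0 / KM_PER_MPC       # H0 in 1/s
    total, da = 0.0, a_max / steps
    for i in range(steps):
        a = (i + 0.5) * da     # midpoint rule
        h = h0 * math.sqrt(OMEGA_M / a**3 + OMEGA_L)
        total += da / (a * h)
    return total / GYR

print(f"age of the Universe at z=4.3: {age_at_z(4.3):.2f} Gyr")
print(f"age of the Universe today:    {age_at_z(0.0):.1f} Gyr")
```

With these parameters the Universe is about 13.8 billion years old today, while at z = 4.3 it was about 1.4 billion years old, close to a tenth of its current age.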
The individual galaxies in this dense cosmic pileup are starburst galaxies and the concentration of vigorous star formation in such a compact region makes this by far the most active region ever observed in the young Universe. Thousands of stars are born there every year, compared to just one in our own Milky Way.
The Oteo team discovered a similar megamerger formed by ten dusty star-forming galaxies, nicknamed a “dusty red core” because of its very red colour, by combining observations from ALMA and the APEX.
Iván Oteo explains why these objects are unexpected: “The lifetime of dusty starbursts is thought to be relatively short, because they consume their gas at an extraordinary rate. At any time, in any corner of the Universe, these galaxies are usually in the minority. So, finding numerous dusty starbursts shining at the same time like this is very puzzling, and something that we still need to understand.”
These forming galaxy clusters were first spotted as faint smudges of light, using the South Pole Telescope and the Herschel Space Observatory. Subsequent ALMA and APEX observations showed that they had unusual structure and confirmed that their light originated much earlier than expected — only 1.5 billion years after the Big Bang.
The new high-resolution ALMA observations finally revealed that the two faint glows are not single objects, but are actually composed of fourteen and ten individual massive galaxies respectively, each within a radius comparable to the distance between the Milky Way and the neighbouring Magellanic Clouds.
"These discoveries by ALMA are only the tip of the iceberg. Additional observations with the APEX telescope show that the real number of star-forming galaxies is likely even three times higher. Ongoing observations with the MUSE instrument on ESO’s VLT are also identifying additional galaxies,” comments Carlos De Breuck, ESO astronomer.
Current theoretical and computer models suggest that protoclusters as massive as these should have taken much longer to evolve. By using data from ALMA, with its superior resolution and sensitivity, as input to sophisticated computer simulations, the researchers are able to study cluster formation less than 1.5 billion years after the Big Bang.
"How this assembly of galaxies got so big so fast is a mystery. It wasn’t built up gradually over billions of years, as astronomers might expect. This discovery provides a great opportunity to study how massive galaxies came together to build enormous galaxy clusters," says Tim Miller, a PhD candidate at Yale University and lead author of one of the papers.
This research was presented in two papers, “The Formation of a Massive Galaxy Cluster Core at z = 4.3”, by T. Miller et al., to appear in the journal Nature, and “An Extreme Proto-cluster of Luminous Dusty Starbursts in the Early Universe”, by I. Oteo et al., which appeared in the Astrophysical Journal.
The Miller team is composed of: T. B. Miller (Dalhousie University, Halifax, Canada; Yale University, New Haven, Connecticut, USA), S. C. Chapman (Dalhousie University, Halifax, Canada; Institute of Astronomy, Cambridge, UK), M. Aravena (Universidad Diego Portales, Santiago, Chile), M. L. N. Ashby (Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts, USA), C. C. Hayward (Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts, USA; Center for Computational Astrophysics, Flatiron Institute, New York, New York, USA), J. D. Vieira (University of Illinois, Urbana, Illinois, USA), A. Weiß (Max-Planck-Institut für Radioastronomie, Bonn, Germany), A. Babul (University of Victoria, Victoria, Canada), M. Béthermin (Aix-Marseille Université, CNRS, LAM, Laboratoire d’Astrophysique de Marseille, Marseille, France), C. M. Bradford (California Institute of Technology, Pasadena, California, USA; Jet Propulsion Laboratory, Pasadena, California, USA), M. Brodwin (University of Missouri, Kansas City, Missouri, USA), J. E. Carlstrom (University of Chicago, Chicago, Illinois, USA), Chian-Chou Chen (ESO, Garching, Germany), D. J. M. Cunningham (Dalhousie University, Halifax, Canada; Saint Mary’s University, Halifax, Nova Scotia, Canada), C. De Breuck (ESO, Garching, Germany), A. H. Gonzalez (University of Florida, Gainesville, Florida, USA), T. R. Greve (University College London, Gower Street, London, UK), Y. Hezaveh (Stanford University, Stanford, California, USA), K. Lacaille (Dalhousie University, Halifax, Canada; McMaster University, Hamilton, Canada), K. C. Litke (Steward Observatory, University of Arizona, Tucson, Arizona, USA), J. Ma (University of Florida, Gainesville, Florida, USA), M. Malkan (University of California, Los Angeles, California, USA), D. P. Marrone (Steward Observatory, University of Arizona, Tucson, Arizona, USA), W. Morningstar (Stanford University, Stanford, California, USA), E. J. Murphy (National Radio Astronomy Observatory, Charlottesville, Virginia, USA), D. Narayanan (University of Florida, Gainesville, Florida, USA), E. Pass (Dalhousie University, Halifax, Canada; University of Waterloo, Waterloo, Canada), R. Perry (Dalhousie University, Halifax, Canada), K. A. Phadke (University of Illinois, Urbana, Illinois, USA), K. M. Rotermund (Dalhousie University, Halifax, Canada), J. Simpson (University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh; Durham University, Durham, UK), J. S. Spilker (Steward Observatory, University of Arizona, Tucson, Arizona, USA), J. Sreevani (University of Illinois, Urbana, Illinois, USA), A. A. Stark (Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts, USA), M. L. Strandet (Max-Planck-Institut für Radioastronomie, Bonn, Germany) and A. L. Strom (Observatories of The Carnegie Institution for Science, Pasadena, California, USA).
The Oteo team is composed of: I. Oteo (Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, UK; ESO, Garching, Germany), R. J. Ivison (ESO, Garching, Germany; Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, UK), L. Dunne (Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, UK; Cardiff University, Cardiff, UK), A. Manilla-Robles (ESO, Garching, Germany; University of Canterbury, Christchurch, New Zealand), S. Maddox (Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, UK; Cardiff University, Cardiff, UK), A. J. R. Lewis (Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, UK), G. de Zotti (INAF-Osservatorio Astronomico di Padova, Padova, Italy), M. Bremer (University of Bristol, Tyndall Avenue, Bristol, UK), D. L. Clements (Imperial College, London, UK), A. Cooray (University of California, Irvine, California, USA), H. Dannerbauer (Instituto de Astrofíısica de Canarias, La Laguna, Tenerife, Spain; Universidad de La Laguna, Dpto. Astrofísica, La Laguna, Tenerife, Spain), S. Eales (Cardiff University, Cardiff, UK), J. Greenslade (Imperial College, London, UK), A. Omont (CNRS, Institut d’Astrophysique de Paris, Paris, France; UPMC Univ. Paris 06, Paris, France), I. Perez–Fournón (University of California, Irvine, California, USA; Instituto de Astrofísica de Canarias, La Laguna, Tenerife, Spain), D. Riechers (Cornell University, Space Sciences Building, Ithaca, New York, USA), D. Scott (University of British Columbia, Vancouver, Canada), P. van der Werf (Leiden Observatory, Leiden University, Leiden, The Netherlands), A. Weiß (Max-Planck-Institut für Radioastronomie, Bonn, Germany) and Z-Y. Zhang (Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, UK; ESO, Garching, Germany).
STRANGE objects have been spotted orbiting the supermassive black hole at the centre of the Milky Way.
Astronomers aren't quite sure what they could be, so they've used 13 years' worth of data to identify them as a new class of object.
Andrea Ghez, co-author of a recent study about the mystery, said: "These objects look like gas and behave like stars."
The six objects are named G1 through to G6.
They appear to be interacting with the black hole known as Sagittarius A* and look like elongated blobs.
Some scientists have theorised that the blobs are gas clouds, each with a mass several times that of Earth.
Another theory is that they are small stars covered in dust.
Either way, the objects are able to orbit the black hole's edge without getting pulled in and destroyed.
The objects appear compact but stretch out whenever their orbit takes them close to the black hole.
A full orbit can take anywhere from 170 to 1,600 years.
The first two G objects were discovered back in 2005 and 2012.
Because they were able to orbit the black hole without being ripped to shreds, the scientists knew they were looking at objects never seen before.
Ghez has stated that the objects can't just be gas clouds or they would have been dragged into the black hole.
Instead, they must have some sort of object within.
All the objects have different orbits and some are faster than others.
Co-author Mark Morris, UCLA professor of physics and astronomy, said: "One of the things that has gotten everyone excited about the G objects is that the stuff that gets pulled off of them by tidal forces as they sweep by the central black hole must inevitably fall into the black hole.
"When that happens, it might be able to produce an impressive fireworks show since the material eaten by the black hole will heat up and emit copious radiation before it disappears across the event horizon."
The astronomers now think that each G object may have begun as a binary pair of stars revolving around each other.
They also think these stars could have merged thanks to the gravitational pull of the supermassive black hole.
Ultimately, the objects could explain how galaxies and black holes evolve.
Ghez said: "It's possible that many of the stars we've been watching and not understanding may be the end product of mergers that are calm now.
"We are learning how galaxies and black holes evolve. The way binary stars interact with each other and with the black hole is very different from how single stars interact with other single stars and with the black hole."
The team now wants to do more analysis to see if they can discover even more unusual objects.
What is a black hole? The key facts
Here's what you need to know…
What is a black hole?
- A black hole is a region of space where absolutely nothing can escape
- That's because they have extremely strong gravitational effects, which means once something goes into a black hole, it can't come back out
- They get their name because even light can't escape once it's been sucked in – which is why a black hole is completely dark
What is an event horizon?
- There has to be a point at which you're so close to a black hole you can't escape
- Otherwise literally everything in the universe would have been sucked into one
- The point at which you can no longer escape from a black hole's gravitational pull is called the event horizon
- The event horizon varies between different black holes, depending on their mass and size
What is a singularity?
- The gravitational singularity is the very centre of a black hole
- It's a one-dimensional point that contains an incredibly large mass in an infinitely small space
- At the singularity, space-time curves infinitely and the gravitational pull is infinitely strong
- Conventional laws of physics stop applying at this point
How are black holes created?
- Most black holes are made when a supergiant star dies
- This happens when stars run out of fuel – like hydrogen – to burn, causing the star to collapse
- When this happens, gravity pulls the centre of the star inwards quickly, and collapses into a tiny ball
- It expands and contracts until one final collapse, causing part of the star to collapse inward thanks to gravity, and the rest of the star to explode outwards
- The remaining central ball is extremely dense, and if it's especially dense, you get a black hole
15 Ongoing Space Missions You Should Know About
Last month, the European Space Agency (ESA) landed a robot on a comet. While the exciting news seemed to come out of nowhere, you can be forgiven for sleeping through the initial launch—it happened in 2004. Scientists and engineers at space agencies around the world play very long games. Rosetta traveled 6.4 billion kilometers before rendezvousing with Comet 67P/Churyumov-Gerasimenko. Even on the starship Enterprise, that’s well over an hour away at warp speed. This raises the question: what else is going on up there? Here are 15 ongoing space missions you might not know about.
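To put that warp-speed quip in perspective: even a signal moving at light speed would need nearly six hours to cover the distance Rosetta traveled. A rough back-of-the-envelope sketch, using the article's own 6.4-billion-km figure:

```python
# Light-travel time over Rosetta's total cruise distance.
# 6.4 billion km is the article's figure; the speed of light is exact.
C_KM_S = 299_792.458  # speed of light, km/s

trip_km = 6.4e9  # total distance Rosetta traveled before the rendezvous
hours_at_light_speed = trip_km / C_KM_S / 3600  # ≈ 5.93 hours
```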
1. Akatsuki
Japan Aerospace Exploration Agency (JAXA) launched Akatsuki (“Dawn”), a meteorological satellite, in 2010. It arrived at its destination, Venus, later that year. Space exploration is hard, though, and due to an engine problem, the probe failed to enter Venus’s orbit.
Here’s what happened: On average, it takes about eight minutes for a radio signal to reach Venus from Earth. (Sometimes it’s shorter; sometimes it’s longer. It just depends on where the planets are.) Anything sent such vast distances, then, has to be somewhat self-sufficient. Not only did JAXA have to deal with that delay, but once Akatsuki reached the Cloud Planet and began its maneuver into orbit, the probe had to enter a total communications blackout—it was, for a time, on the other side of the planet with no way for signals to reach Earth. Once communications were reestablished, JAXA learned that the orbital maneuver had failed, the probe shot past Venus, and the system went into a kind of holding pattern. (Even in their setbacks, space probes are designed to be resilient and cunning.)
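The eight-minute figure is an average; the actual delay swings widely with the planets' positions. A quick sketch using approximate closest and farthest Earth-Venus distances (rounded textbook values, not mission data):

```python
# One-way radio delay between Earth and Venus at different separations.
# The two distances below are rounded illustrative values.
C_KM_S = 299_792.458  # speed of light, km/s

def light_time_minutes(distance_km: float) -> float:
    """One-way signal travel time, in minutes, over a given distance."""
    return distance_km / C_KM_S / 60

closest = light_time_minutes(38e6)    # near inferior conjunction: ~2.1 min
farthest = light_time_minutes(261e6)  # near superior conjunction: ~14.5 min
```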
The bad news was that physics was no longer on the probe’s side and another try at Venus was impossible; entering orbit is typically a one-shot deal. The good news? Engineers are geniuses. They discovered that while its main engine was shot, its little thrusters were OK—so they put Akatsuki into hibernation mode and a heliocentric orbit (i.e. around the Sun), and the waiting game began. Rather than try to chase down Venus, they decided, why not just let Venus and Akatsuki chase down each other? The two will again line up in late 2015, at which point another attempt at establishing orbit will be made. It’s risky—this is the first time the thrusters have ever been used in such a way. But if it works, humanity’s understanding of the weather and volcanism of our “sister planet” will increase greatly.
2. Juno
NASA launched Juno in 2011 as part of its New Frontiers program. Its mission: to fly to Jupiter and figure out how the planet was formed, what it’s made of, and how its formation affected that of the Solar System. (Actually, any information about Jupiter would be nice. The whole planet is a great big mystery.)
The real story begins 4.6 billion years ago, when a giant nebula suffered a gravitational collapse. The resulting bedlam coalesced to form the Solar System. Jupiter is key to understanding how this happened because it was likely the first planet to form. It is thus made of the same material as that nebula. In other words, Juno is on a scientific odyssey to the origin of the Solar System. If we can figure out Jupiter, we might be able to figure out where we came from. The probe should arrive at Jupiter on July 4, 2016.
3. Dawn
NASA, ever faced with budgetary woes from a state devoid of imagination or ambition, was forced to more or less cancel the Dawn mission in 2003, 2005, and 2006. Undaunted, today the orbiter is four months away from Ceres (the largest object in the asteroid belt), having already spent 14 months orbiting Vesta (the second-largest). Dawn was launched into space in 2007 and has since been stacking up “firsts” in space exploration. According to NASA, it’s the first “purely scientific” probe powered by ion thrusters. It’s the first probe to visit Vesta, and thus the first probe to visit a protoplanet. It’s set to be the first to visit Ceres, and if it achieves orbit with that dwarf planet (another first!), will be the first probe to orbit two bodies in a single mission. And it’s the first prolonged mission in the asteroid belt.
Why does the mission matter? During the formation of the Solar System, celestial dust merged into clusters, which merged into rocks, which merged into planets. Vesta and Ceres should have been right there alongside Earth, Venus, Mars, etc., in our sixth-grade light bulb diorama, but they couldn’t quite make the jump to planet-hood. The reason: Jupiter, and its incredibly deep gravity well. That’s great news for us. These proto-planets—one rocky and the other icy—are more or less windows into the past, and by studying them, we can fill in the blanks on the history and makeup of the Solar System. Dawn will arrive at Ceres in April.
4. New Horizons
Nine years ago, NASA launched space probe New Horizons as part of its New Frontiers program. (New Frontiers, according to NASA, “sends cost effective, mid-sized spacecraft on missions that enhance our understanding of the solar system.” See: Juno, above.) First, a little stellar cartography: if we were to draw a simplified version of the Solar System as a series of concentric rings, it would start with the Sun at the center. Next would be Mercury, Venus, Earth, and Mars, which make up the “inner” or “terrestrial” planets. Moving outward: separating Mars and Jupiter is the asteroid belt (home to proto-planets Pallas, Ceres, and Vesta). Beyond the asteroid belt are Jupiter, Saturn, Uranus, and Neptune, which are collectively known as the “outer planets” (or “gas giants”). The outer planets are really, really big. (Ganymede, for example, one of Jupiter’s moons, is only a bit smaller than Mars. Europa, another of Jupiter’s moons, harbors the best chance of extraterrestrial life in the Solar System. These are really exciting places.) Beyond the outer planets is yet another belt—the Kuiper Belt (of which Pluto is a part)—that consists of bodies called “volatiles,” which are frozen gasses. Beyond the Kuiper Belt is Eris, which was initially called the tenth planet, but is now characterized as a dwarf planet (to the relief of astrologers everywhere). Then we have the Oort Cloud, which is kind of a shell of comets that surrounds the Solar System.
New Horizons launched in 2006 for a date with Pluto, the only planet (well, it was still a planet when we launched it) that we haven’t explored. In 2007, the spacecraft used Jupiter’s gravity to sling it into space with a bit more speed (a “bit more” defined here as an increase of 9000 miles per hour). Because NASA never wastes an opportunity, during this time New Horizons captured four months’ worth of Jupiter imagery and atmospheric data. The probe also crossed paths with asteroid 132524 APL, returning images and composition data.
Next year, the probe will reach Pluto and its moon, Charon. The expected scientific returns are enormous. As Alan Stern of the New Horizons project said in a news conference, “Everything that we know about the Pluto system today could probably fit on one piece of paper.” That’s about to change in a big way. So far, things are looking good. On December 6, 2014, mission control sent orders to the probe to “wake up,” which it promptly did. New Horizons should return some thrilling data—beginning next year, the quality of images it captures will begin to exceed those of the Hubble Space Telescope. Its primary mission will be to determine the geology, chemical composition, and atmospheres of Pluto and Charon. In 2016, it’s on to the Kuiper Belt for further exploration. How long-term is the New Horizons mission? If things go well, the probe might still have power into the 2030s, returning data on Kuiper Belt objects as well as the outer heliosphere.
5. Rosetta
Historians will one day hail 2014 as a pivotal year in space exploration—the year the European Space Agency landed a robot on a comet. It wasn’t easy—the mission required four gravity assists to reach the comet, including one that took it a perilous 150 miles from the surface of Mars. Once it reached its target, scientists and engineers had to land a tiny probe onto a 2.5-mile-wide comet traveling at 84,000 miles per hour—at a distance of 317 million miles. (For comparison, a bullet only travels 1700 miles per hour.)
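Those speed figures are easier to feel as a ratio. Taking the article's rounded numbers at face value:

```python
# How the comet's speed stacks up against the article's bullet figure.
# Both values are the article's rounded numbers, used only for scale.
comet_mph = 84_000
bullet_mph = 1_700

ratio = comet_mph / bullet_mph             # comet moves ~49x faster than the bullet
comet_miles_per_second = comet_mph / 3600  # ≈ 23 miles covered every second
```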
The Rosetta mission didn’t end when the Philae probe landed on Comet 67P/Churyumov-Gerasimenko, sent back volumes of data, and went dark. It continues even now. The Rosetta spacecraft is functioning optimally, and has settled into the “comet escort phase” of the operation. It will continue returning images and data of the comet as it approaches the Sun. The closer it gets, the more exciting things will be, as the heated comet will begin releasing frozen gasses and form an atmosphere of sorts around its nucleus. Rosetta will be there, studiously taking notes and collecting samples. It will also be on alert for any signals emanating from the comet’s surface—it’s possible that as the comet approaches the Sun, Philae will wake up and resume sending data for analysis. Not bad for technology that precedes the iPhone by several years.
6. Cassini
When we think about space exploration, it’s often challenging to maintain perspective on just how impossible the whole enterprise is. In a way, scientists and engineers are victims of their own success. “What?” the public cries. “Philae didn’t land on the comet like Mary Lou Retton in the 1984 Olympics? We can’t do anything right!” Sometimes it’s important to take a step back, clear your mind, and apply a moment’s thought to what the world’s space agencies are doing.
Cassini is a good place to start. In 1997, a joint NASA-ESA-ASI (Agenzia Spaziale Italiana—Italy’s space agency) spacecraft was launched into space with Saturn as its target. When Saturn and Earth are at their closest, they’re still 750,000,000 miles apart. Part 1 of the mission was to get there, which just shouldn’t be possible for a species that only learned to safely send an object into space 57 years ago. Along the way, the spacecraft took photographs of the Solar System, including the most detailed photo of Jupiter ever captured. (That wasn’t even the mission—it was just something scientists did because the Xbox hadn’t yet been invented and they needed some way to pass the time.) Four years after launch, scientists noticed that the probe’s camera was hazy. They had to work out a way to clean the lens from millions of miles away. They were successful. In October 2003—a year and a half later, and still seven months before the probe would reach Saturn—Cassini went ahead and confirmed Einstein’s Theory of General Relativity.
Cassini arrived in the Saturn system in May 2004 and started collecting data on the planet and its moons. In December, it launched a probe called Huygens, sending it to Titan, one of Saturn’s moons. It arrived at the moon a couple of weeks later, where it safely parachuted to the surface, and returned data and photographs (at a distance of 750,000,000 miles away from Earth). Huygens holds the record for the farthest distance we’ve safely landed a spacecraft.
The mission didn’t end there. Cassini continued collecting data and stunning imagery of Saturn and its moons. In 2005, the spacecraft made a daring run at Enceladus and discovered that the Saturnian moon is venting geysers of water and ice into space. In 2008, Cassini’s mission was extended, and it collected samples from Enceladus’s geysers. In 2010, even though it had logged a total of 2.6 billion miles, Cassini’s mission was again extended because the thing just won’t quit. Through 2017, the spacecraft has hundreds of flybys and orbits planned. In other words, nine years after the craft’s shutdown date, it will still be expanding our understanding of the Solar System.
7. Hayabusa 2
JAXA’s Hayabusa 2 mission has a modest goal: to help determine the origin of life. Last week, Mitsubishi H-IIA rockets shot the probe into space, where it is scheduled to rendezvous with the inelegantly named (162173) 1999 JU3 asteroid in 2018. Here’s the plan: Once Hayabusa 2 reaches the asteroid, it will release three small, hopping sensors to its surface to collect data. It will also release five landing beacons, which the spacecraft will use to touch down on the asteroid and collect a sample. Easy, right? Just wait. Then the craft will lift off and release an “impactor” floating in space. Meanwhile, Hayabusa 2 will fly to the other side of the asteroid. Why? Because the impactor will ignite into a missile and bomb the asteroid. Hayabusa 2 will then fly back to the impact point and collect a new, much deeper sample from the giant hole it created. A deployable camera will capture the whole thing. In 2020, it will return to Earth with a bunch of samples of the asteroid’s surface and insides. The material and data it collects will help scientists continue piecing together what happened 4.6 billion years ago when the Solar System formed.
8. Pioneer 10 & Pioneer 11
To be clear, Pioneer 10 and Pioneer 11 are no longer returning information to Earth, but the probes are still on a mission as interstellar ambassadors. Pioneer 10 was launched in 1972 and sent on a “planetary grand tour.” It was the first spacecraft to pass through the asteroid belt (an astounding achievement—just think about it for a minute) and the first to get close-ups of Jupiter. It measured things like the planet’s magnetosphere (important because Jupiter’s magnetosphere is the largest continuous entity in the Solar System) and it determined that Jupiter is essentially a liquid planet. (These are things that “everybody knows” today, but we only know it because of this probe!) Eleven years after launch, it passed beyond the orbit of Neptune (then the outermost planet, as Pluto’s eccentric orbit had temporarily carried it closer to the Sun), becoming the first spacecraft to travel beyond all the known planets. Until its final transmission in 2003, it returned information on solar wind and cosmic rays. Today it continues on a course heading for the star Aldebaran, which it should reach in two million years.
Pioneer 11 was launched in 1973 with the purpose of studying the asteroid belt, which is a pretty harrowing barrier between Earth and the outer planets. Like its big brother, it also studied Jupiter before collecting volumes of data on the Saturn system. NASA lost contact with the probe in 1995. Today it continues its voyage to the constellation Scutum, whose largest star is more or less 44,100,000,000,000,000 miles away.
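That mileage figure is more intelligible in light-years. A conversion sketch using the article's rounded distance:

```python
# Converting the quoted distance to Pioneer 11's target region into light-years.
# One light-year is about 5.879 trillion miles (rounded constant).
MILES_PER_LIGHT_YEAR = 5.879e12

distance_miles = 4.41e16  # the article's "44,100,000,000,000,000 miles"
distance_ly = distance_miles / MILES_PER_LIGHT_YEAR  # ≈ 7,500 light-years
```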
Though we’re no longer receiving signals from either Pioneer spacecraft, when we talk about long term planning, these probes are not kidding around. At the behest of astrophysicist Carl Sagan, mounted to both probes are plaques, each depicting a man and woman (with an illustration of the spacecraft for scale); a map of the Solar System; our location in the galaxy; and an illustration of hydrogen atoms. In other words, the Pioneer spacecraft are the first interstellar ambassadors of humanity. Should an extraterrestrial species discover the probes, they will know who we are, where we live, and what we know.
9. Voyager 1
Like the Pioneer spacecraft, Voyager 1 was designed, and sent, to study the outer planets. On September 5, 1977, it launched from Cape Canaveral, with a full array of sensors and sophisticated communications equipment on board. Sixteen months later, it began observing the Jovian system. Some of the most famous and recognizable photographs of Jupiter and Saturn came from Voyager 1’s cameras. (Check out this compelling and strangely unnerving video at the Planetary Society.) Among its discoveries are the volcanoes on Io, Jupiter’s moon; the atmospheric composition of Saturn and its wild windstorms below; and the surface diameter of Titan.
In 1990, Voyager 1 took the first “family portrait” of the Solar System, including the famed “pale blue dot” photograph of Earth. In 2004, Voyager 1, still diligently sending back data, registered “termination shock”—the slowing of solar winds. The following year, scientists concluded that it had entered the heliosheath—a turbulent area where weak solar winds from the Sun meet with interstellar space.
Thirty-three years after its launch, in 2011, scientists decided to test Voyager 1’s maneuverability. After a successful test roll, the craft was oriented so as to better measure solar winds (or the lack thereof). On August 25, 2012, Voyager 1 entered interstellar space, placing it outside of our star system (indeed, any star system)—the first manmade object to do so. In 300 years, it will enter the Oort Cloud. Its sensor equipment will not begin shutting down until 2020, and until the final instrument goes dark (as late as 2030), it will still be registering and returning data on life in the interstellar medium.
10. Voyager 2
Voyager 2 is the identical twin of Voyager 1, and actually launched into space about two weeks earlier. (Due to differing trajectories, Voyager 1 would eventually pass Voyager 2 in traveling outward from the Sun.) The probes had similar missions to study the outer planets, though unlike Voyager 1, this probe also visited Neptune and Uranus—the only such probe to ever study those planetary systems. In a way, Voyager 2 is the Captain Cook of space, having discovered 11 of Uranus’s moons. The probe examined Uranus’s axial tilt and magnetosphere, as well as its unusual rings. Later, when it reached Neptune, it discovered the planet’s “Great Dark Spot,” and closely studied Triton, one of Neptune’s moons. In the next few years, it will reach interstellar space. It continues to transmit back to Earth discoveries, data, and observations.
11. Kepler
When Kepler launched in 2009, the plan was for it to spend three years studying space for other Earth-like exoplanets in “Goldilocks Zones”: places not too hot, not too cold—hospitable, in other words, to life. (Considering the state of this planet, it’s probably a good idea to have a few backups.) So far, the program has identified roughly 3,800 planet candidates and confirmed about 960 of them as genuine planets. According to Space.com, “mission scientists expect more than 90 percent of the mission's candidate planets will turn out to be the real deal.” Kepler even found what astronomers have called a “second Earth.” NASA’s Exoplanet Archive hosts a comprehensive list of the planets identified by Kepler.
After completing its primary mission, two of Kepler’s reaction wheels (necessary for precise orientation) failed, resulting in the need for a new assignment. In 2014, the mission was rechristened K2, and now, in addition to searching out planets, also observes star clusters and supernovae. To compensate for the malfunctioning wheels, K2 positions itself so as to use the sun’s rays to balance it out. In other words, it tilts to a certain angle, and uses the photons bashing into it for balance. (Space.com compares this to balancing a pencil on your finger.) The mission, which even before the malfunction was slated to end in 2012, is funded and expected to remain in operation at least through 2016.
12. STEREO
One of the problems with being stuck on this slimy mudhole is that scientists can only see what physics allows them to see. Historically, the only side of the Sun we can watch is the side facing the Earth, and there’s nothing we can do about it. Enjoy whatever angle of the Solar System is visible through your telescope, because that’s all you’re going to get for a while—and forget about looking back at Earth.
The Solar Terrestrial Relations Observatory (STEREO) intends to change that. Launched in 2006, STEREO is comprised of two nearly identical satellites, one of which is ahead of Earth’s orbit, while the other is behind. The result is the first stereoscopic imagery of the Sun. This is enormously beneficial when tracking solar storms—scientists now have three-dimensional views of ongoing events without being confined to Earth-based vantage points. Likewise, scientists can now see what’s happening on the far side of the Sun without relying on inference and extrapolation. That’s total solar visibility, available to them anytime in 3-D. The STEREO observatories also provide previously impossible viewing angles of the Solar System—they can even look back at Earth. The locations of the two observatories can be tracked at any time at NASA’s Stereo Science Center website. The orbits of the STEREO satellites will keep them away from Earth until 2023.
13. Mars Orbiter Mission
In 2013, the Indian Space Research Organization (ISRO) launched the Mars Orbiter Mission (or MOM) and became the fourth space agency to reach the Red Planet. In many ways, the mission is a shakedown and demonstration of everything the Indian Space Research Organization has achieved to date, and one of their goals is to test everything from deep space communication to contingency systems. So far, the mission has been an astonishing success, and a low-cost one at that. At $73 million, MOM is the least expensive Mars mission ever mounted. All of this is thrilling news for anyone who cares about space travel. Science and exploration are cumulative—the more people and probes we have up there, the more we’ll learn and the sooner we’ll see humans leaving footprints in the soil of other worlds. NASA and ISRO have since established a joint working group, and are planning future collaborative missions. MOM is expected to remain in orbit until at least March 2015.
14. Venus Express
The European Space Agency launched Venus Express in 2005 to study—you guessed it—Venus. Well, mostly. The probe arrived at Venus in 2006, at which point it entered orbit and began a 500-day study of Venus’s clouds, air, surface—everything, basically. When those 500 days ran out, it began a second mission. And a third. And a fourth. So far, Venus Express has discovered recent volcanic activity; an upper atmospheric layer that’s surprisingly cold for a planet otherwise described as a “red hot furnace”; and ozone activity similar to that of Earth, which helps us understand both planets' atmospheres with greater clarity, and gives us new insight into how climate change works.
Venus Express also had a secondary mission: to study Earth. From Venus’s point of view, Earth is practically a pixel, which is exactly what exoplanets across the galaxy look like from Earth. From the vantage point of Venus, scientists have been studying Earth and trying to figure out if our planet is inhabited. If they can “discover” life on Earth, there’s a much better chance they can use the same techniques to discover life on other planets.
As of today, Venus Express is pretty much out of fuel and awaiting an orbital decay. But because nobody is sure of the exact moment the fuel will run out and the probe will cease to exist, scientists continue collecting data and making plans for future observation and analysis.
15. International Comet Explorer
The International Comet Explorer (ICE) launched in 1978 and looks like every space probe ever drawn in science fiction pulps from the 1950s. Originally called the International Sun/Earth Explorer 3, it was directed to use an array of sensors to study the Earth’s magnetosphere and investigate cosmic rays. Like so many spacecraft, once it achieved its objective, its life was extended and its mission was changed. In 1982, the probe was renamed the International Comet Explorer and directed into a heliocentric orbit. There it was directed to rendezvous with Giacobini-Zinner, a comet first discovered in 1900. In 1985, it crossed into the comet’s tail, gathering data and sending it home for analysis. The following year, it flew through the tail of Comet Halley.
In 1991, ICE was back in its quiet heliocentric orbit and returned to duty studying cosmic rays. By 1997, though 12 of its 13 instruments were still working, the probe was of little use to NASA, who donated it to the Smithsonian Museum. (Yes, the probe was still in space at the time. I’m sure everyone at NASA got a good laugh about that one.)
It took a long time, but the orbits of ICE and Earth finally intersected in 2014. That’s when NASA discovered a problem. We could still understand the signals that ICE was sending Earth, but because of radical changes in technology, we had no way of sending information back to ICE. (This is pretty much the exact plot of Star Trek: The Motion Picture.) As the Goddard Space Center explained, “The transmitters of the Deep Space Network, the hardware to send signals out to the fleet of NASA spacecraft in deep space, no longer includes the equipment needed to talk to ISEE-3. These old-fashioned transmitters were removed in 1999. Could new transmitters be built? Yes, but it would be at a price no one is willing to spend. And we need to use the DSN because no other network of antennas in the US has the sensitivity to detect and transmit signals to the spacecraft at such a distance.”
That, it would seem, was that. (Why can we still talk to Voyager 1, which was launched in 1977, but not ICE, which launched two years later? Because NASA never stopped talking to Voyager.) Interestingly, ICE was never even supposed to resume contact with NASA. When the space agency ended ICE’s mission years earlier, it meant to switch the probe off. It didn’t, thus the 2014 dilemma. And while this wasn’t exactly an Apollo 13-level crisis, it did present an interesting problem.
Enter a group of space enthusiasts and engineers. They decided to make a go of it, and crowd-funded an effort to make contact with the abandoned probe. They engineered a relatively inexpensive radio with open source software, and hooked it up to a satellite dish at the Arecibo Observatory in Puerto Rico. They picked up the probe’s carrier signal, which was a good sign. They then sent telemetry data to the probe. They got no response. After a dramatic pause, however, the probe responded to the request. The team rebooted the probe, and as it continued on its journey, it again began sending reams of scientific data back to Earth. And best of all, the data can be accessed by anyone at "A Spacecraft for All."
In September, the probe’s orbit again took it beyond the reach of Earth communications. If the probe remains in a steady orbit, we will resume contact in 17 years.
Beta Pictoris b orbits the young star Beta Pictoris. This exoplanet is the first to have its rotation rate measured. Image credit: ESO L. Calçada/N. Risinger
The mass of a very young exoplanet has been revealed for the first time using data from the European Space Agency’s (ESA) star-mapping spacecraft Gaia and its predecessor, the Hipparcos satellite, which was retired a quarter of a century ago.
Astronomers Ignas Snellen and Anthony Brown from Leiden University, the Netherlands, deduced the mass of the planet Beta Pictoris b from the motion of its host star over a long period of time as captured by both Gaia and Hipparcos.
The planet is a gas giant similar to Jupiter but, according to the new estimate, is 9 to 13 times more massive. It orbits the star Beta Pictoris, the second brightest star in the constellation Pictor.
The planet was only discovered in 2008 in images captured by the European Southern Observatory’s Very Large Telescope in Chile. Both the planet and the star are only about 20 million years old – roughly 225 times younger than the Solar System. Its young age makes the system intriguing but also difficult to study using conventional methods.
“In the Beta Pictoris system, the planet has ...
In April 2017, West Virginia University celebrated Einstein with a month-long series of lectures, rooftop telescope observing events, art displays, robotics and other activities.
But WVU has had a connection to Einstein long before now that will endure for years to come.
Here are some examples of how Mountaineers are keeping Einstein's legacy alive.
Cracking the cosmos
SEAN MCWILLIAMS (FRONT) PARTICIPATES IN "A SHOUT ACROSS TIME," A DANCED LECTURE ON EINSTEIN'S THEORY OF GENERAL RELATIVITY, BLACK HOLES AND GRAVITATIONAL WAVES.
Sean McWilliams describes a major discovery in physics that Einstein predicted this way:
“At 5:51 a.m. Sept. 14, 2015, something went ‘chirp,’” he wrote.
That “chirp” – from two black holes colliding in space more than a billion light years away – fulfilled the last prediction of Einstein’s general theory of relativity, a framework envisioning that dense objects cause a distortion in spacetime, which is felt as gravity.
McWilliams, mathematics assistant professor Zach Etienne, master's mathematics student Caleb Devine and physics and astronomy doctoral student Belinda Cheeseboro were part of the team — LIGO, short for the Laser Interferometer Gravitational-wave Observatory — that cracked the code by detecting invisible ripples in spacetime. Their observation proves that energy travels in waves across space and time, leaving unique distortions along its path.
Laying the groundwork
McWilliams isn't the first WVU faculty member to follow Einstein’s lead in studying gravitational waves.
Astrophysics professor Maura McLaughlin is chair of the North American Nanohertz Observatory for Gravitational Waves, a National Science Foundation Physics Frontiers Center, which is also on the hunt for gravitational waves.
In 2015, the WVU Center for Gravitational Waves and Cosmology was launched, bringing together researchers from physics and astronomy, mathematics and engineering. McLaughlin directs the center, which collaborates with NASA and faculty at other universities to detect gravitational waves.
Teenage radio wave hunters
One of WVU's most prominent sources of discovery is the world's largest fully steerable radio telescope, nestled in a rural West Virginia town of 143 people.
Over the years, WVU has invested in the Green Bank Observatory in Green Bank, W.Va., and researchers including Maura McLaughlin and Duncan Lorimer have used data from the observatory in their hunt for radio waves.
The observatory also provides data to Einsteins-in-training, high school students across the U.S. who have discovered pulsars by sifting through data from the Green Bank Telescope. The project is a collaboration of the Green Bank Observatory and WVU with funding from the National Science Foundation.
Messages from Anonymous
Illustration of Duncan Lorimer
In 2007, astrophysics professor Duncan Lorimer and undergraduate David Narkevic happened upon a signal that is now known as the first fast radio burst.
Scientists have discovered more of them in the intervening years, but no one’s really sure what they are. There’s been speculation that they could be coming from aliens, of course. We do know that, unlike pulsars, which emit powerful radio bursts at regular intervals, fast radio bursts do not recur on a regular schedule.
“It could turn out that they’re really rather mundane, and we might not be using them for very much,” Lorimer says with a wry laugh. “Right now it’s an interesting race just to figure out what they are and worry about it after that.”
Read the full story in our archives.
Einstein and the arts
Hey, who said Einstein is all about hard science and complex formulas?
To complement WVU's Celebrating Einstein event, artists, dancers and musicians paid tribute to Einstein's work in their own talented ways.
Morgantown-area ceramic artist Sarah Guerry made black hole bowls and gravitational wave detection mugs for the event, while the WVU School of Theatre and Dance helped present a "danced" lecture on Einstein's theory of general relativity, black holes and gravitational waves.
A BLACK HOLE BOWL
Kathryn Williamson, teaching assistant professor of physics and astronomy, manager of the WVU Planetarium and organizer of the Celebrating Einstein event, even showcased some of her own artwork depicting “spacetime.” When you're in town, make reservations to see a planetarium show. Coming soon is “Einstein's Gravity Playlist,” a show focusing on Einstein's theory of relativity, which predicted gravitational waves.
A planetary system hiding out in the constellation Dorado, some 100 light-years from Earth, hosts an Earth-size planet orbiting within its star’s habitable zone.
A few months ago, a group of NASA exoplanet astronomers, who are in the business of discovering planets around other stars, called me into a secret meeting to tell me about a planet that had captured their interest. Because my expertise lies in modeling the climate of exoplanets, they asked me to figure out whether this new planet was habitable — a place where liquid water might exist.
These NASA colleagues, Josh Schlieder and his students Emily Gilbert, Tom Barclay and Elisa Quintana, had been studying data from TESS (Transiting Exoplanet Survey Satellite) when they discovered what may be TESS’ first known Earth-sized planet in a zone where liquid water could exist on the surface of a terrestrial planet. This is very exciting news because this new planet is relatively close to Earth, and it may be possible to observe its atmosphere with either the James Webb Space Telescope or ground-based large telescopes.
Habitable zone planets
The host star of the planet that Gilbert’s team discovered is called TESS of Interest number 700, or TOI-700. Compared to the Sun, it is a small, dim star: 40% of the Sun’s size, only about 1/50 of its brightness, and located about 100 light-years from Earth in the constellation Dorado, which is visible from the Southern Hemisphere. For comparison, the nearest star to us, Proxima Centauri, is 4.2 light-years away from Earth. To get a sense of these distances, if you were to travel on the fastest spacecraft (Parker Solar Probe) to reach Proxima Centauri, it would take nearly 20,000 years.
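That travel-time figure is easy to sanity-check with a few lines of arithmetic. The ~60 km/s average cruise speed used below is an assumption for illustration (Parker Solar Probe is far faster at perihelion but spends most of its orbit moving much more slowly), not a quoted spacecraft spec:

```python
# Rough travel-time check for the Proxima Centauri figure.
LY_KM = 9.4607e12           # kilometres in one light-year
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

def travel_years(distance_ly, speed_km_s):
    """Years to cover `distance_ly` light-years at a constant speed in km/s."""
    return distance_ly * LY_KM / speed_km_s / SECONDS_PER_YEAR

# Assuming an average cruise speed of ~60 km/s (illustrative only),
# the trip takes roughly 21,000 years -- "nearly 20,000" in round numbers.
print(round(travel_years(4.2, 60)))
```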
There are three planets around TOI-700: b, c and d. Planet d is Earth-size, within the star’s habitable zone and orbits TOI-700 every 37 days. My colleagues wanted me to create a climate model for Planet d using the known properties of the star and planet. Planets b and c are Earth-size and mini-Neptune-size, respectively. However, they orbit much closer to their host star, receiving 5 times and 2.6 times the starlight that our own Earth receives from the Sun. For comparison, Venus, a dry and hellishly hot world with surface temperature of approximately 860 degrees Fahrenheit, receives twice the sunlight of Earth.
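Those starlight ratios follow directly from the inverse-square law. As a sketch (using the article’s figure of 1/50 of the Sun’s luminosity; the resulting distances are illustrative and will differ slightly from the discovery paper’s fitted orbits):

```python
import math

L_STAR = 1 / 50   # luminosity of TOI-700 in solar units (from the article)

def orbit_au(flux_rel_earth, lum_solar=L_STAR):
    """Orbital distance in AU at which a planet receives `flux_rel_earth`
    times Earth's insolation, via S/S_Earth = L / d^2 (d in AU, L in L_sun)."""
    return math.sqrt(lum_solar / flux_rel_earth)

print(orbit_au(5.0))   # planet b, 5x Earth's starlight: ~0.063 AU
print(orbit_au(2.6))   # planet c, 2.6x: ~0.088 AU
print(orbit_au(0.86))  # roughly Earth-like insolation: ~0.15 AU
```

The key point is how compact the whole system is: even the habitable-zone orbit sits several times closer to its star than Mercury does to the Sun, which is only possible because the star is so dim.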
Until about a decade ago, only two habitable zone planets of any size were known to astronomers: Earth and Mars. Within the last decade, however, thanks to discoveries made through both ground-based telescopes and the Kepler mission (which also looked for exoplanets from 2009 to 2019, but is now retired), astronomers have discovered about a dozen terrestrial-sized exoplanets. These are between half and two times larger than the Earth within the habitable zones of their host stars.
Despite the relatively large number of small exoplanet discoveries to date, the majority of stars are between 600 to 3,000 light-years away from Earth — too far and dim for detailed follow-up observation.
Why is liquid water important for habitability?
Unlike Kepler, TESS’ mission is to search for planets around the Sun’s nearest neighbors: those bright enough for follow-up observations.
Between April 2018 and now, TESS has discovered more than 1,500 planet candidates. Most are more than twice the size of Earth with orbits of less than 10 days. Earth, of course, takes 365 days to orbit the Sun. As a result, these planets receive significantly more heat than Earth receives from the Sun and are too hot for liquid water to exist on their surfaces.
Liquid water is essential for habitability. It provides a medium for chemicals to interact with each other. While it is possible for exotic life to exist at higher pressures, or hotter temperatures — like the extremophiles found near hydro-thermal vents or the microbes found half a mile beneath the West Antarctic ice sheet — those discoveries were possible because humans were able to directly probe those extreme environments. They would not have been detectable from space.
When it comes to finding life, or even habitable conditions, beyond our solar system, humans depend entirely upon remote observations. Surface liquid water may create habitable conditions that can potentially promote life. These life forms can then interact with the atmosphere above, creating remotely detectable bio-signatures that Earth-based telescopes can detect. These bio-signatures could be current Earth-like gas compositions (oxygen, ozone, methane, carbon dioxide and water vapor), or the composition of ancient Earth 2.7 billion years ago (mostly methane and carbon dioxide, and no oxygen).
We know one such planet where this has already happened: Earth. Therefore, astronomers’ goal is to find those planets that are about Earth-size, orbiting at those distances from the star where water could exist in liquid form on the surface. These planets will be our primary targets to hunt for habitable worlds and signatures of life outside our solar system.
Possible climates for planet TOI-700 d
To prove that TOI-700 d is real, Gilbert’s team needed to confirm using data from a different type of telescope. TESS detects planets when they cross in front of the star, causing a dip in the starlight. However, such dips could also be created by other sources, such as spurious instrumental noise or binary stars in the background eclipsing each other, creating false positive signals. Independent observations came from Joey Rodriguez at Center for Astrophysics at Harvard University. Rodriguez and his team confirmed the TESS detection of TOI-700 d with the Spitzer telescope, and removed any remaining doubt that it is a genuine planet.
My student Gabrielle Engelmann-Suissa and I used our modeling software to figure out what type of climate might exist on planet TOI-700 d. Because we do not yet know what kind of gases this planet may actually have in its atmosphere, we use our climate models to explore possible gas combinations that would support liquid oceans on its surface. Engelmann-Suissa, with the help of my longtime collaborator Eric Wolf, tested various scenarios including the current Earth atmosphere (77% nitrogen, 21% oxygen, remaining methane and carbon dioxide), the composition of Earth’s atmosphere 2.7 billion years ago (mostly methane and carbon dioxide) and even a Martian atmosphere (a lot of carbon dioxide) as it possibly existed 3.5 billion years ago.
Based on our models, we found that if the atmosphere of planet TOI-700 d contains a combination of methane or carbon dioxide or water vapor, the planet could be habitable. Now our team needs to confirm these hypotheses with the James Webb Space Telescope.
Strange new worlds and their climates
The climate simulations our NASA team has completed suggest that an Earth-like atmosphere and gas pressure aren’t adequate to support liquid water on the planet's surface. If we put the same quantity of greenhouse gases as we have on Earth into the atmosphere of TOI-700 d, its surface temperature would still be below freezing.
Our own atmosphere supports a liquid ocean on Earth now because our star is bigger and brighter than TOI-700. One thing is for sure: All of our teams’ modeling indicates that the climates of planets around small and dim stars like TOI-700 are very unlike what we see on our Earth.
The field of exoplanets is now in a transitional era from discovering them to characterizing their atmospheres. In the history of astronomy, new techniques enable new observations of the universe including surprises like the discovery of hot-Jupiters and mini-Neptunes, which have no equivalent in our solar system. The stage is now set to observe the atmospheres of these planets to see which ones have conditions that support life.
This article was originally published at The Conversation. The publication contributed the article to Live Science’s Expert Voices: Op-Ed & Insights.
(Figure: word-frequency chart over time, Google N-Gram style, using “Ice Age” as a control.)
A new paper just published in Nature has made a bit of a stir because it has been interpreted as suggesting that global warming has the benefit of avoidance of an Ice Age that was just about to happen. However, the paper does not actually say that, and we already knew that we may have avoided the next ice age, possibly by human activities dating back to the 19th century or before. Also, the paper actually addresses a different question, an important one, but one that may be a bit esoteric for may interested parties.
First, the esoteric question. Simply put, over the last two million years or so, the Earth has gone through a couple of dozen cycles that have ice ages at one end and very warm periods (such as the one we were in during the 19th century) at the other end. The first several cycles were modest, but the most recent have been extreme, with the cold periods involving the growth of major continental glaciers big enough, for example, to cover most of Canada and a chunk of the US. The current warm period, enhanced by anthropogenic global warming, is probably already warmer than the previous really warm periods, and over the next couple of decades will certainly be what has been called a “super-interglacial” with temperatures consistently being above anything during this entire glacial-interglacial cycle.
This cycling of climate is linked to a cycle of how much of the Sun’s energy falls on the earth, when, and where. The simple version of this arises from the fact that land masses, where continental glaciers can form, are concentrated in the Northern Hemisphere. Continental glaciers have their own cooling system (by being bright and reflecting away sunlight, mainly) so once they form they tend to be self-sustaining. But it is difficult for them to form to begin with because, well, the Earth is usually too warm. But, if Northern Hemisphere summers are chilled down sufficiently several years in a row, these glaciers can start to form, and this can be part of the onset of a new glacial period.
The Earth wobbles as it rotates. The elliptical orbit of the Earth around the Sun varies in how elongated it is. The location of the Earth on this elliptical orbit during a particular moment in the seasonal round changes over time (so every now and then the solstice, for instance, happens when the Earth is maximally distant from the sun). These three factors change in a regular cycle over different time periods. Every now and then all three factors cause the following thing to be true: late June, the longest and thus sunniest period of the year in the Northern Hemisphere, is also the time when the Earth is farthest from the sun on an elliptical orbit that is as elongated as it ever gets, but the Earth has wobbled up so that the Northern Hemisphere is not as pointed towards the Sun as it could be. When this happens, Northern Hemisphere summers have a minimal amount of the Sun’s energy.
But that difference is probably not enough to start an ice age, and the opposite times, when the Northern Hemisphere’s summers are maximally sunlight, are probably not enough warmer than other periods to kill off an ice age.
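The distance part of these variations is easy to quantify with the inverse-square law. The sketch below compares today’s orbital eccentricity with 0.058, a commonly quoted near-maximum value for Earth’s orbit (an assumption for illustration; the exact extremes depend on the timescale considered):

```python
def perihelion_aphelion_flux_ratio(e):
    """Ratio of solar flux at perihelion (closest approach) to flux at
    aphelion (farthest point) for orbital eccentricity `e`.
    Flux scales as 1/r^2, and r ranges from a(1-e) to a(1+e)."""
    return ((1 + e) / (1 - e)) ** 2

print(perihelion_aphelion_flux_ratio(0.0167))  # today: ~1.07, a ~7% swing
print(perihelion_aphelion_flux_ratio(0.058))   # near-maximum e: ~1.26, ~26%
```

So the seasonal sunlight swing due to orbital shape alone can range from a few percent to roughly a quarter, depending on where we are in the eccentricity cycle; what matters for glaciation is which hemisphere’s summer lines up with the far end of the orbit.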
During the 1970s and early 80s, the cycles of variation in the Sun’s energy caused by these orbital quirks were reconsidered (the original observation dates to the 19th century) and correlated with recently obtained isotope data from sea cores indicating glacial cycles. They matched: extra NH summer sunlight coincided with the interglacials pinned down by the isotopic data, and reduced NH summer sunlight matched in time with the glacial periods. But over subsequent years, research tended to show that the changes in sunlight and glacial activity did not correlate exactly. Rather, the onset and melting of glaciers seemed to be triggered by other factors.
Over time we have come to realize that the orbital effects, known as Milankovitch Cycles, probably determine the potential for the Earth to be in a glacial period vs. an interglacial period, but other factors actually push the climate system into these new states.
This is like so many other things in nature. You have the right genes to develop perfect pitch, but that does not make you a musician. Growing up in an environment that would encourage one to be a musician is not sufficient to make you a great musician. Having perfect pitch and a music-friendly environment and a few other things, all together in the same person, might create a David Bowie. Or not. But given millions of people, there will be hundreds of great musicians, and most of them will have most of these factors in place.
The current research is a study that relates atmospheric CO2 changes and Milankovitch changes, and it may be an important contribution to understanding this complex system. I’ve not thought about the paper enough to endorse (or dispute) it, but that is what the paper is about.
Meanwhile, years ago, back in the late 1960s and through the 1970s, paleoclimate experts like John Imbrie and JM Mitchell and others pointed out that greenhouse gasses would likely bring on a “super interglacial” that would obviate an ice age that might otherwise occur very soon. They also noted that after thousands of years following the burning of the last available fossil fuel, or the curtailment of this insane practice, the CO2 added to the atmosphere would likely cycle back into solid form, and the next time orbital geometry matched up with other stuff, we could have our ice age again.
More recently, Bill Ruddiman looked at human activities in recent history and suggested that land clearing practices associated with agriculture, and the early burning of fossil fuels, was sufficient to put off an ice age.
Today we know that the cycling in and out of Ice Ages over the last million years or so is associated with atmospheric CO2 levels well within the range of 200ppm to 300ppm. So, I would guess that once we passed around 300ppm we left the likelihood of an ice age behind. Indeed, it is possible that had we not done that, we might have eventually figured out that we should do it, to avoid an ice age.
But enough is enough. The fact that you like your hamburger cooked does not mean you should cook it at 10,000 degrees C for a year. You cook it the right amount. More than that ruins it. We might benefit from “cooking” the Earth just a little bit to avoid an ice age (and yes, we do want to avoid an ice age), but we don’t want to overcook the Earth. We passed an annual average CO2 concentration of 400ppm a few months ago. The hamburger, and our goose, is being overcooked.
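To put rough numbers on the “cooking”: a widely used logarithmic fit (from Myhre and colleagues, 1998) approximates the extra energy CO2 traps relative to the pre-industrial ~280 ppm baseline. This is a back-of-envelope sketch, not a full climate model:

```python
import math

def co2_forcing_wm2(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing (W/m^2) from raising CO2 from
    `c0_ppm` to `c_ppm`, using the common 5.35 * ln(C/C0) fit."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing_wm2(300))  # ~0.37 W/m^2 -- a slight nudge
print(co2_forcing_wm2(400))  # ~1.91 W/m^2 -- well past ice-age avoidance
```

The logarithm is why each additional 100 ppm matters a bit less than the last, but 400 ppm still represents roughly five times the forcing of the 300 ppm level that plausibly sufficed to take the next ice age off the table.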
One outcome of the new research is to suggest that without human perturbation of the climate, we would have skipped this ice age anyway. This assertion is the reason I’m reserving judgement on this paper. I wonder if all the appropriate factors have been taken into account, because I find this assertion difficult to believe. But, I’m not going to make an argument based on incredulity. I’ll just note my incredulity, as someone who has studied Pleistocene climate change, and consider getting back to you on this at a later time.
The paper further suggests that current burning of CO2 will extend that period of time to the next Ice age by double, and that “Our simulations demonstrate that under natural conditions alone the Earth system would be expected to remain in the present delicately balanced interglacial climate state, steering clear of both large-scale glaciation of the Northern Hemisphere and its complete deglaciation, for an unusually long time.”
So, when media report that this study suggests that anthropogenic global warming has put off an ice age, they are talking about shifting a 50,000 year delay to the next ice age (without human effects) to a 100,000 year delay. This would be a new idea, because we were thinking that we had put off an ice age that was just about to happen (over the next centuries). So, the paper actually says nearly the exact opposite of what the press says it says. How could this happen? Can’t imagine…
Ganopolski, A. R. Winkelmann,& H. J. Schellnhuber. 2016. Critical insolation–CO2 relation for diagnosing past and future glacial inception. Nature 529, 200–203 (14 January 2016) doi:10.1038/nature16494.
This is a guest post by David Kirtley. David originally posted this as a Google Doc, and I’m reproducing his work here with his permission. Just the other day I was speaking to a climate change skeptic who made mention of an old Time or Newsweek (he was not sure) article that talked about fears of a coming ice age. There were in fact a number of articles back in the 1970s that discussed the whole Ice Age problem, and I’m not sure what my friend was referring to. But here, David Kirtley places a recent meme that seems to be an attempt to diffuse concern about global warming because we used to be worried about global cooling. The meme, however, is not what it seems to be. And, David places the argument that Ice Age Fears were important and somehow obviate the science in context.
The 1970s Ice Age Myth and Time Magazine Covers
– by David Kirtley
A few days ago a facebook friend of mine posted the following image:
From the 1977 cover we can see that apparently a new ice age was supposed to arrive. Only 30 years later, according to the 2006 cover, global warming is supposed to be the problem. But the cover on the left isn’t from 1977. It actually is this Time cover from April 9, 2007:
As you can see, the cover title has nothing to do with an imminent ice age, it’s about global warming, as we might expect from a 2007 Time magazine.
The faked image illustrates one of the fake-skeptics’ favorite myths: The 1970s Ice Age Scare. It goes something like this:
- In the 1970s the scientists were all predicting global cooling and a future ice age.
- The media served as the scientists’ lapdog parroting the alarming news.
- The ice age never came—the scientists were dead wrong.
- Now those same scientists are predicting global warming (or is it “climate change” now?)
The entire purpose of this myth is to suggest that scientists can’t be trusted, that they will say/claim/predict whatever to get their names in the newspapers, and that the media falls for it all the time. They were wrong about ice ages in the 1970s, they are wrong now about global warming.
But why fake the 1977 cover? Since, according to the fake-skeptics, there was so much news coverage of the imminent ice age why not just use a real 1970s cover?
I searched around on Time’s website and looked through all of the covers from the 1970s. I was shocked (shocked!) to find not a single cover with the promise of an in-depth, special report on the Coming Ice Age. What about this cover from December 1973 with Archie Bunker shivering in his chair entitled “The Big Freeze”? Nope, that’s about the Energy Crisis. Maybe this cover from January 1977, again entitled “The Big Freeze”? Nope, that’s about the weather. How about this one from December 1979, “The Cooling of America”? Again with the Energy Crisis.
Now, there really were news articles in the 1970s about scientists predicting a coming ice age. Time had a piece called “Another Ice Age?” in 1974. Time’s competition, Newsweek, joined in with “The Cooling World” in 1975. People have collected lists and lists of “Coming Ice Age” stories from newspapers, magazines, books, tv shows, etc. throughout the 1970s.
But if it was such a big news story why did it never make the cover of America’s flagship news magazine like the faked image implies? Perhaps there is more to the story.
In the 1970s there were a few developments in climate science:
- Scientists were finding answers to the puzzle of what caused ice ages in the past: variations in earth’s orbit.
- Scientists were gathering data from around the world to come up with global average temperatures, and they found that temperatures had been cooling since about the 1940s.
- Scientists were realizing that some of this cooling was due to increasing air pollution (soot and aerosols, tiny particles suspended in the air) which was decreasing the amount of solar energy entering the atmosphere.
- Scientists were also quantifying the “greenhouse effect” of another part of our increasing pollution: carbon dioxide (CO2), which should cause the climate to warm.
The realization that very long cycles in earth’s orbit could cause the waxing and waning of ice ages, coupled with the fact that our soot and aerosols were already causing cooling, led some scientists to conclude that we may be headed for another ice age. Exactly when was still a little unclear. However, the warming effects of CO2 had been known for over a century, and new research in the 1970s was showing that CO2 warming would more than compensate for the cooling caused by aerosols, resulting in net warming.
This, in a very brief nutshell, was the state of climate science in the 1970s. And so the media of the time published many stories about a coming ice age, which made for timely reading during some very cold winters. But many news stories also mentioned that other important detail about CO2: that our climate might soon change due to global warming. In 1976 Time published “The World’s Climate: Unpredictable” which is a very good summary of the then current scientific thinking: some scientists emphasized aerosols and cooling, some scientists emphasized CO2 and warming. There was no consensus either way. Many other 1970s articles which mention a Coming Ice Age also mention the possibility of increased warming due to CO2. For instance, here, here and here.
Fake-skeptics read these stories and only focus on the Coming Ice Age angle, and they enlarge the importance of those scientists who focused on that angle. They totally ignore the rest of the picture of 1970s climate science: that increasing CO2 would cause global warming.
The purpose of the image of the two Time magazine covers, and of the Coming Ice Age Myth, is not to show the real history of climate science, but to obscure that history and to cause confusion. It seems to be working. Because today, when there really is a consensus about climate science and 97% of climatologists agree that adding CO2 to the atmosphere is leading to climate change, only 45% of the public know about that consensus. The other 55% must think we’re still in the 1970s when scientists were still debating the issue. Seems newsworthy to me, maybe Time will run another cover story on it.
To learn more see:
- The Discovery of Global Warming: Revised and Expanded Edition (New Histories of Science, Technology, and Medicine), Spencer R. Weart. The author has an online expanded version of this book.
- Ice Ages: Solving the Mystery, John Imbrie and Katherine Palmer Imbrie.
- “The Myth of the 1970s Global Cooling Scientific Consensus,” Peterson, Thomas C., William M. Connolley, John Fleck, 2008: Bull. Amer. Meteor. Soc., 89, 1325–1337.
A very large percentage of the earth’s land masses were covered by glacial ice during the last glaciation. Right now it is about 10%, but during the Ice Age it was much more. Enough of the earth’s water was trapped in this glacial ice that the oceans were about 120 to 150 meters lower than they are now. The thicker ice sheets were one or two kilometers thick, and they tended to slide around quite a bit, grinding down the surface of the earth and turning bedrock into dust and cobbles.
As Comet 2011 L4 PanSTARRS moves out of the inner solar system, we’ve got another comet coming into view this month for northern hemisphere observers.
Comet C/2012 F6 Lemmon is set to become a binocular object low to the southeast at dawn for low northern latitudes in the first week of April. And no, this isn’t an April Fools’ Day hoax, despite the comet’s name. Comet Lemmon (with two m’s) was discovered by the Mount Lemmon Sky Survey (MLSS) based outside of Tucson, Arizona on March 23, 2012. MLSS is part of the Catalina Sky Survey which searches for Near Earth Asteroids.
The comet is on an extremely long elliptical orbit, with a period of over 11,000 years. Comet Lemmon just passed perihelion at 0.74 astronomical units from the Sun on March 24th.
Southern hemisphere observers have been getting some great views of Comet Lemmon since the beginning of this year. It passed only three degrees from the south celestial pole on February 5th, and since that time has been racing up the “0 Hour” line in right ascension. If that location sounds familiar, that’s because another notable comet, 2011 L4 PanSTARRS has been doing the same. In fact, astrophotographers in the southern hemisphere were able to catch both comets in the same field of view last month.
Another celestial body occupies the 0 Hour neighborhood this time of year. The Sun just passed the vernal equinox, marking the start of Spring in the northern hemisphere and Fall in the southern, on March 20th.
And like PanSTARRS, Comet Lemmon has a very steep orbit inclined 82.6° relative to the ecliptic.
Comet Lemmon broke naked-eye visibility reaching +6th magnitude in late February and has thus far closely matched expectations. Current reports place it at magnitude +4 to +5 as it crosses northward through the constellation Cetus. Predictions place the maximum post-perihelion brightness between magnitudes +3 and +5 in early April, and thus far, Comet Lemmon seems to be performing right down the middle of this range.
Southern observers have caught a diffuse greenish nucleus 30” in diameter on time exposures, accompanied by a short, spiky tail. Keep in mind, the quoted brightness of a comet is extended over its entire surface area. Thus, while a +4th magnitude star may be easily visible in the dawn, a 3rd or even 2nd magnitude comet may be invisible to the unaided eye. Anyone who attempted to spot Comet PanSTARRS in the dusk last month knows how notoriously fickle it actually was. Binoculars are your friend in this endeavor. Begin slowly sweeping the southeast horizon about an hour before local sunrise looking for a fuzzy “star” that refuses to reach focus. Comet Lemmon will get progressively easier in the dawn sky for latitudes successively farther north as the month of April progresses.
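The surface-brightness point is worth making concrete. As a hypothetical illustration (function names and numbers are my own, not from the article), spreading a comet's quoted magnitude over a 30” coma yields a mean surface brightness many magnitudes fainter than the headline figure suggests:

```python
import math

def flux_ratio(m1, m2):
    """Brightness ratio of two objects from their magnitude difference (Pogson's relation)."""
    return 10 ** (-0.4 * (m1 - m2))

def surface_brightness(total_mag, diameter_arcsec):
    """Mean surface brightness (mag per square arcsecond) of a uniform circular coma."""
    area = math.pi * (diameter_arcsec / 2) ** 2  # arcsec^2
    return total_mag + 2.5 * math.log10(area)

# A +4 magnitude comet smeared over a 30" coma:
print(round(surface_brightness(4.0, 30.0), 1))  # ~11.1 mag/arcsec^2 -- far fainter per patch of sky
print(round(flux_ratio(4.0, 2.0), 2))           # a +4 object delivers only ~0.16x the flux of a +2 one
```

This is why a 2nd-magnitude comet can hide in twilight while a 2nd-magnitude star is obvious: the same light is diluted over hundreds of square arcseconds.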
Comet Lemmon will continue to gain elevation as it crosses from Cetus into the constellation Pisces on April 13th. An interesting grouping occurs as the planet Mercury passes only a few degrees from the comet from April 15th to April 17th. Having just passed greatest elongation on March 31st, Mercury will shine at magnitude -0.1 and make a good guide to locate the comet in brightening dawn skies. The pair is joined by the waning crescent Moon on the mornings of April 7th and 8th which may also provide for the first sighting opportunities from low north latitudes around these dates.
The Moon reaches New phase on Wednesday, April 10th at 5:35AM EDT/9:35 UT. It will be out of the morning sky for the next couple of weeks until it reaches Full on April 25th, at which point it will undergo the first eclipse of 2013, a very shallow partial. (More on that later this month!)
Comet Lemmon will then slide across the celestial equator on April 20th and cross the plane of the ecliptic on April 22nd as it heads up into the constellation Andromeda in mid-May. We’re expecting Comet Lemmon to be a fine binocular object for late April, but perhaps not as widely observed due to its morning position as PanSTARRS was in the dusk.
By mid-May, Comet Lemmon will have dipped back down below +6th magnitude and faded out of interest to all but a few deep sky enthusiasts. Comet Lemmon will pass within 10° of the north celestial pole on August 9th, headed back out into the icy depths of the solar system not to return for another 11,000-odd years.
It’s interesting to see how these two springtime comets will affect observers’ expectations for the passage of Comet C/2012 S1 ISON. Will this in fact be the touted “Comet of the Century?” Much hinges on whether ISON survives its November 28th perihelion only 1,166,000 kilometers from the center of our Sun (about 3 times the Earth-Moon distance, or roughly 0.68 solar radii above the Sun’s surface). If so, we could be in for a fine “Christmas Comet” rivaling the passage of Comet Lovejoy in late 2011. On the other hand, a disintegration of Comet ISON would be more akin to the fizzle of Comet Elenin earlier in 2011.
International astronomers based in Italy have had their eyes on a distant star cluster discovered more than four decades ago dubbed Terzan-5, and what they’ve found is pretty interesting.
There are reportedly two distinct types of stars in this cluster, which scientists have aged at around 7 billion years apart from one another. This suggests that while some of the stars are relatively young, others are fossils from the early Milky Way that are still kicking today.
The findings have been submitted for review to The Astrophysical Journal, but you can view the paper online right now.
The star cluster system is located around 19,000 light-years away from us, but using data accumulated from the ESO Very Large Telescope, the Hubble Space Telescope, and various Earth-based telescopes, scientists have determined that Terzan-5 is very different from other globular clusters we know of in our galaxy.
Because there seems to be a 7-billion-year gap in between the formation of some of these stars, it would mean there was a pause in between the formation of the stars in the system before star-formation activity started up again.
“This requires the Terzan 5 ancestor to have large amounts of gas for a second generation of stars and to be quite massive. At least 100 million times the mass of the Sun,” explains Davide Massari, co-author of the study.
Because there are still possibly some remnants of the early Milky Way lurking in this cluster, you can imagine astronomers are excited to learn more about what’s going on here. After all, it provides a unique opportunity to see more about what the early Milky Way might have been like.
“We think that some remnants of these gaseous clumps could remain relatively undisrupted and keep existing embedded within the galaxy,” explains Francesco Ferraro from the University of Bologna, Italy, and lead author of the study. “Such galactic fossils allow astronomers to reconstruct an important piece of the history of our Milky Way.”
With what we now know, Terzan-5 may be a primordial building block of our galaxy, and could help us to better understand how our galaxy formed.
Source: Hubble Space Telescope
Title: A Thirty Kiloparsec Chain of “Beads on a String” Star Formation Between Two Merging Early Type Galaxies in the Core of a Strong-Lensing Galaxy Cluster
Authors: Grant R. Tremblay, Michael D. Gladders, Stefi A. Baum, Christopher P. O’Dea, Matthew B. Bayliss, Kevin C. Cooke, Håkon Dahle, Timothy A. Davis, Michael Florian, Jane R. Rigby, Keren Sharon, Emmaris Soto, Eva Wuyts
First Author’s Institution: European Southern Observatory, Germany
Paper Status: Accepted for Publication in ApJ Letters
Take a look at all that gorgeous science in Figure 1! No really, look: that’s a lot of science in one image. Okay, what is it you’re looking at? First, those arcs labeled in the image on the left are galaxies at high redshift being gravitationally lensed by the cluster in the middle (which has the wonderful name SDSS J1531+3414). Very briefly, gravitational lensing is when a massive object (like a galaxy cluster) bends the light of a background object (like these high redshift galaxies), fortuitously focusing the light towards the observer. It’s a chance geometric alignment that lets us learn about distant, high-redshift objects. The lensing was the impetus for these observations, taken by Hubble’s Wide Field Camera 3 (WFC3) in four different filters across the near ultraviolet (NUV, shown in blue), optical (two filters, shown in green and orange), and near infrared (yellow). But what fascinated the authors of this paper is something entirely different happening around that central cluster. The image on the right is a close-up of the cluster with no lensing involved at all. The cluster is actually two elliptical galaxies in the process of merging together, accompanied by a chain of bright NUV emission. NUV emission is associated with ongoing star formation, which is rarely seen in elliptical galaxies (ellipticals are old, well evolved galaxies, which means they’re made mostly of older stellar populations and lack significant star formation; they’re often called “red and dead” for this reason). Star formation is however expected around merging galaxies (even ellipticals) as gas gets stirred up, and the striking “beads on a string” morphology is often seen in spiral galaxy arms and arms stretching between interacting galaxies. 
But the “beads” shape is hard to explain here, mostly because of the orientation (look how it’s not actually between the galaxies, but off to one side) and the fact that this is possibly the first time it has been observed around giant elliptical galaxies.
So what’s going on in this cluster? First, the authors made sure the central two galaxies are actually interacting, and that the star formation is also related. It’s always important to remember that just because two objects appear close together in an image doesn’t necessarily mean they’re close enough to interact. Space is three dimensional, while images show us only 2D representations. Luckily, these targets all have spectroscopy from the Sloan Digital Sky Survey (SDSS), which measures a few different absorption lines and gives the same redshift for all of the components: the two interacting galaxies, and the star formation regions (see Figure 2). Furthermore the authors have follow-up spectroscopy from the Nordic Optical Telescope (NOT), which confirms the SDSS results. So they’re definitely all part of one big, interacting system.
Hα (the 3-2 transition of hydrogen) indicates ongoing star formation, so the authors measure the Hα luminosity of the NUV-bright regions to calculate a star formation rate (SFR). Extinction due to dust and various assumptions underlying the calculation mean the exact SFR is difficult to pin down, but should be between ~5-10 solar masses per year. From that number, it’s possible to estimate the molecular gas mass in the regions. This estimate basically says that if you know how fast stars are produced (the SFR), then you know roughly how much fuel is around (fuel being the cold gas). This number turns out to be about 0.5-2.0 × 10^10 solar masses. The authors tried to verify this observationally by observing the CO(1-0) transition (a tracer of cold molecular gas), but received a null detection. That’s okay, as this still puts an upper limit on the gas of 1.0 × 10^10 solar masses, which is both within their uncertainties and a reasonable amount of cold gas, given the mass of the central galaxy pair (but for more information on gas in elliptical galaxies, see this astrobite!).
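The back-of-the-envelope step from SFR to gas mass can be sketched with a simple depletion-time scaling. The ~1-2 Gyr depletion times used below are assumed typical values for star-forming galaxies, not figures from the paper, and the function name is illustrative:

```python
def gas_mass(sfr_msun_per_yr, t_dep_gyr):
    """Rough molecular gas reservoir: M_gas ~ SFR x depletion time, in solar masses."""
    return sfr_msun_per_yr * t_dep_gyr * 1e9  # convert Gyr -> yr

# The paper's SFR range (~5-10 Msun/yr) with assumed 1-2 Gyr depletion times:
lo = gas_mass(5, 1.0)    # 5e9 Msun
hi = gas_mass(10, 2.0)   # 2e10 Msun
print(f"{lo:.1e} to {hi:.1e} Msun")
```

Reassuringly, this crude product brackets the 0.5-2.0 × 10^10 solar masses quoted above.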
The point is that there’s definitely a lot of star formation happening around these galaxies, and while star formation is expected around mergers, it’s not clear that this particular pattern of star formation has ever been seen around giant ellipticals before. The authors suggest that’s because this is a short-lived phenomenon, and encourage more observations. Specifically, they point out that Gemini GMOS observations already taken will answer questions about gas kinematics, that ALMA has the resolution to ascertain SFRs and molecular gas masses for the individual “beads” of star formation, and that Chandra could answer questions about why the star formation is happening off-center from the interacting galaxies. If the gas is condensing because it’s been shocked, that will show up in X-ray observations, but it would be expected between the galaxies, not off to the side as in this case. Maybe some viscous drag is causing a separation between the gas and the stars? There’s clearly a lot to learn from this system, so keep an eye out for follow-up work.
Today’s paper presents new observations of the central star of the Kepler-11 system, which, despite having a planetary system utterly unlike the Solar system, is nearly identical to the Sun.
The bending of light by gravity produces many phenomena, which can be exploited to make otherwise impossible observations. Chief among these is microlensing, where the light from a distant star or galaxy can be magnified by another object in between it and the observer. Initially used to image distant galaxy clusters, it can also reveal the presence of otherwise undetectable planets.
How do planets meet their ends? For many of the smallest worlds, it may be as a debris disc strewn around the tiny white dwarf that is all that is left of their stars. The faint infrared glow from nearly forty such discs has been discovered, their rocky origins given away by the chemical composition of the material falling onto the parent white dwarf. Today’s paper adds another disc to the sample, although not without difficulty.
Nearly a year ago, Astrobites reported on an unexpected finding from the Kepler spacecraft: A pair of white dwarfs that were “outbursting”, becoming as much as 20 percent brighter every few days before quieting down again. Today’s paper adds another two outbursting white dwarfs, and begins to explore the reason for this hitherto unobserved behavior.
The James Webb Space Telescope will be the largest space observatory built to date. The authors of today’s paper suggest one possible use for the giant new telescope: Searching for signs of life on other planets.
Scientists from the University of Cape Town in South Africa have recently released a report detailing the discovery of three monstrous black holes clumped together in a distant galaxy. Discovery of these black holes suggests that closely-packed groups of black holes are much more common than we previously thought.
Black holes are regions in space with such high gravity that nothing, not even light, can escape from them. Supermassive black holes are the largest type and their masses can be millions to billions times greater than the sun.
Most galaxies, perhaps all galaxies, are thought to contain at least one supermassive black hole at their centers. Our own Milky Way galaxy is believed to have a supermassive black hole 26,000 light-years away from our Solar System. However, these new findings provide evidence that galaxies can contain multiple black holes.
Galaxies are capable of evolving by merging together. When two galaxies draw close together, their gravities can cause the two to merge and become one large galaxy. Since each galaxy has its own supermassive black hole at its center, it would be reasonable to expect that the resulting merged galaxy winds up with two black holes. However, finding pairs of black holes is much harder in practice than in theory. It is theorized that black holes may fuse together to form a single black hole. Additionally, some black holes orbit each other so closely that it is hard tell the difference between the two.
Scientists discovered this trio of black holes by using VLBI (Very Long Baseline Interferometry). VLBI uses signals from radio telescopes separated by 10,000 km to see details that even the Hubble Space Telescope couldn’t catch. Keith Grainge, a member of the research team, tells the news team at the University of Manchester about VLBI:
“This exciting discovery perfectly illustrates the power of the VLBI technique, whose exquisite sharpness of view allows us to see deep into the hearts of distant galaxies. The next generation radio observatory — the Square Kilometre Array — is being designed with VLBI capabilities very much in mind.”
The research team was very excited to find the black holes, especially since only four triple black-hole systems have ever been discovered. Roger Deane, the lead author of the report, tells National Geographic, “We were quite surprised to find it.”
Astronomers hope that this discovery will lead to further insight into how gravity behaves in extreme conditions. Their goal is to find out more about the elusive gravitational waves that ripple through space-time.
[Image via NASA/CXC/UCLA/Z.LI AND NRAO/VLA]
Fr.: coquille circumstellaire
A shell of dust, molecules, and neutral gas around an evolved star resulting from an intensive mass loss phase, such as the asymptotic giant branch phase for low- and intermediate mass stars and LBVs or supernovae for massive stars.
double shell burning
suzeš-e puste-ye dotâyi
Fr.: combustion double coquille
A situation in the evolution of an → asymptotic giant branch star whereby both hydrogen and helium shells provide energy alternatively. As the burning → helium shell approaches the hydrogen-helium discontinuity, its luminosity decreases because it runs out of the fuel. As a consequence, the layers above contract in response, thus heating the extinguished → hydrogen shell until it is re-ignited. However, the shells do not burn at the same rate: the He burning shell becomes thermally unstable and undergoes periodic → thermal pulses.
Fr.: couche de Dyson
→ Dyson sphere.
puste-ye elekroni (#)
Fr.: couche éléctronique
Any of up to seven energy levels on which an electron may exist within an atom, the energies of the electrons on the same level being equal and on different levels being unequal. The number of electrons permitted in a shell is equal to 2n2. A shell contains n2 orbitals, and n subshells.
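The counting rules in this entry (2n² electrons, n² orbitals, n subshells per shell) can be checked with a short sketch; the function names are illustrative:

```python
def shell_capacity(n):
    """Maximum number of electrons in shell n: 2n^2."""
    return 2 * n ** 2

def shell_orbitals(n):
    """Number of orbitals in shell n: n^2, spread over n subshells (l = 0 .. n-1)."""
    return n ** 2

# Tabulate the first four shells (K, L, M, N):
for n in range(1, 5):
    print(n, shell_capacity(n), shell_orbitals(n))
# prints: 1 2 1 / 2 8 4 / 3 18 9 / 4 32 16
```

The orbital count n² also follows from summing the 2l + 1 orbitals of each subshell over l = 0 to n - 1.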
helium shell burning
suzeš-e puste-ye heliom
Fr.: combustion de la coquille d'hélium
A stage in the evolution of an → asymptotic giant branch star, when all the helium in the core is fused into carbon and oxygen. No more fusion takes place in the core, and as a result the core contracts. The core contraction generates a sufficient temperature for fusing the surrounding layers of helium. Since helium shell burning is unstable, it causes → helium shell flashes.
helium shell flash
deraxš-e puste-ye heliomi
Fr.: flash de la couche d'hélium
A violent outburst of energy that occurs periodically in an → asymptotic giant branch star. It occurs when helium is being burnt in a thin shell surrounding the inner dense core of carbon and oxygen. → Helium shell burning is unstable, producing energy mainly in short intense flashes. The shell flash causes considerable expansion of the star followed by collapse, thus setting up deep convection. As a consequence, the → convective zone in the outer part of the star goes deeper and may → dredge-up carbon to the surface. See also → late thermal pulse; → very late thermal pulse; → AGB final thermal pulse.
hydrogen shell burning
suzeš-e puste-ye hidrožen
Fr.: combustion de la coquille d'hydrogène
A phase in the life of a star that has left the → main sequence. When no more hydrogen is available in the core, the core will start to contract as it is no longer releasing the necessary energy whose pressure supports the surrounding layers. As a result of this contraction, gravitational energy is converted into thermal energy and the temperature will rise. Therefore a shell of unprocessed material surrounding the original core will be heated sufficiently for hydrogen burning to start. During the evolution of → asymptotic giant branch stars hydrogen shell burning occurs alternatively with helium shell burning. → double shell burning.
Newton's shell theorem
farbin-e puste-ye Newton
Fr.: théorème de Newton
In classical mechanics, an analytical method applied to a material sphere to determine the gravitational field at a point outside or inside the sphere. Newton's shell theorem states that: 1) The gravitational field outside a uniform spherical shell (i.e. a hollow ball) is the same as if the entire mass of the shell is concentrated at the center of the sphere. 2) The gravitational field inside the spherical shell is zero, regardless of the location within the shell. 3) Inside a solid sphere of constant density, the gravitational force varies linearly with distance from the center, being zero at the center of mass. For the relativistic generalization of this theorem, see → Birkhoff's theorem.
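Statement 2 (and the exterior case of statement 1) lends itself to a quick numerical check: approximate the shell by many equal point masses and sum their inverse-square pulls. This is a rough Monte Carlo sketch with G and the total shell mass set to 1; all numbers are illustrative:

```python
import math
import random

def sample_sphere(n, radius=1.0, seed=1):
    """n points distributed uniformly on a sphere (normalized Gaussian trick)."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        x, y, z = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        r = math.sqrt(x * x + y * y + z * z)
        pts.append((radius * x / r, radius * y / r, radius * z / r))
    return pts

def field_at(p, pts):
    """Gravitational field at point p from equal point masses totaling 1 (G = 1)."""
    gx = gy = gz = 0.0
    m = 1.0 / len(pts)
    for (x, y, z) in pts:
        dx, dy, dz = x - p[0], y - p[1], z - p[2]
        d3 = (dx * dx + dy * dy + dz * dz) ** 1.5
        gx += m * dx / d3
        gy += m * dy / d3
        gz += m * dz / d3
    return (gx, gy, gz)

shell = sample_sphere(20000)
inside = field_at((0.3, 0.0, 0.0), shell)   # interior test point
outside = field_at((2.0, 0.0, 0.0), shell)  # exterior test point
print(inside[0])   # close to 0: no net pull anywhere inside the shell
print(outside[0])  # close to -0.25: the 1/r^2 pull of a unit point mass at the center
```

The exterior field matches -1/r² = -1/4 toward the center, while the interior forces cancel to within sampling noise, as the theorem asserts.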
sadaf (#), kelâcak (#)
The hard shell of a marine mollusk.
Sadaf, loan from Ar. Kelâcak from Tabari, variant kelâcin, cf. Gilaki guš kuli. The component kel-, kul might be related to PIE *qarq- "to be hard," → crab.
Fr.: coquille; couche
1) General: A relatively thin external form covering a hollow space.
M.E.; O.E. sciell, scill "seashell, eggshell," related to O.E. scealu "shell, husk;" cf. W.Fris. skyl "peel, rind," M.L.G. schelle "rind, egg shell," Goth. skalja "tile;" PIE base *(s)kel- "to cut, cleave."
Pusté "shell," from pust "skin;" Mid.Pers. pôst "skin;" O.Pers. pavastā- "thin clay envelope used to protect unbaked clay tablets;" Av. pastô-, in pastô.fraθanhəm "of the breadth of the skin;" Skt. pavásta- "cover," Proto-Indo-Iranian *pauastā- "cloth."
Fr.: combustion en couche
The nuclear reactions in a shell around a star's core that continue after the fuel in the core itself has been exhausted. As the fuel is progressively exhausted, the shell moves outward until it enters regions too cool for the reactions to continue. For example, after the exhaustion of hydrogen in the core, helium burning might take place in the core with a shell of hydrogen burning surrounding it. Stars may have more than one region of shell burning during their stellar evolution, each shell with its own nuclear reactions. → hydrogen shell burning; → helium shell burning.
Fr.: galaxie en coquille
An elliptical galaxy that is surrounded by thin shells of stars which are thought to have been ejected during a galaxy merger. Shell galaxies are different from ring galaxies in that the shells are much further away from the galaxy's centre and much fainter than the rings. Spectroscopy of the stars in the shells shows that they are old, whereas the stars in a ring galaxy are young.
Fr.: étoile à enveloppe
A main-sequence star, usually of spectral class B to F, whose spectrum shows bright emission lines superimposed on the normal absorption lines. The emission spectrum is explained by the presence of a circumstellar shell of gas surrounding the star at the equator. Shell stars are fast rotators.
Fr.: rotation coquillaire
A rotation mode in which internal rotation of a star depends essentially on depth and little on latitude: Ω(r,θ) = Ω(r), where r is the mean distance to the stellar center of the considered level surface (or → isobar). This particular mode was introduced by J.-P. Zahn (1992, A&A 265, 115) to simplify the treatment of rotational → mixing, but also on more physical grounds. Indeed differential rotation tends to be smoothed out in latitude through → shear turbulence. See also → von Zeipel theorem; → meridional circulation .
Shellular, the structure of this term is not clear; it may be a combination of → shell (referring to star's assumed division in differentially rotating concentric shells) + (circ)ular, → circular. The first bibliographic occurrence of shellular is seemingly in Ghosal & Spiegel (1991, On the Thermonuclear Convection: I. Shellular Instability, Geophys. Astrophys. Fluid Dyn. 61, 161). However, surprisingly the term appears only in the title, and nowhere in the body of the article; → rotation.
pustey-e bâzmânde-ye abar-now-axtar
Fr.: coquille de reste de supernova
Fr.: sous couche
A set of electrons with the same angular momentum quantum number, denoted l. A subshell contains 2l + 1 orbitals and can hold at most 2(2l + 1) electrons.
Just three years ago the prospect of finding temperate, rocky worlds around other stars was still the subject of science fiction: none had been found and reasonable estimates put us years or decades away from such a momentous discovery. All of that has changed very recently on the heels of the extraordinarily successful NASA Kepler mission. By searching for the tiny diminutions of starlight indicative of an eclipsing planet, Kepler has produced thousands of new planet candidates orbiting distant stars. Careful statistical analyses have shown that the majority of these candidates are bona fide planets, and the number of planets increases sharply toward Earth-sized bodies. Even more remarkably, many of these planets are orbiting right “next door,” around tiny red dwarf stars. I will describe our multi-telescope campaign to validate and characterize these tiny planetary systems, and present some early, exciting results that point the way to the first detection of the first Earth-sized planets in the habitable zones of nearby stars.
John Johnson, Harvard University
Microbes or Aliens – The Case for the Existence of Life on Mars
Since time immemorial human beings have pondered the existence of intelligent life in the universe, with a keen interest in our neighboring planet.
The technological achievement of landing the Mars rovers on the surface of the red planet is one of mankind’s most incredible accomplishments. From the Opportunity, Spirit and Curiosity rovers, plus numerous satellite missions, we have gathered many photographic images and other data from the surface of Mars, and we now have an exceptional view of some of its geologic features and makeup.
Recent photo analysis is recognizing that some of the rock features are surprisingly similar to formations here on earth that were shaped by living micro-organisms, opening up the possibility that there is, or was, indeed life on Mars. The Mars Curiosity Rover took photos of a formation on the surface of Mars called the Gillespie Lake outcrop, showing rock structures that are very similar to microbially induced sedimentary structures (MISS) found on Earth, which in some cases have been dated to be up to 3.8 billion years old; some of the oldest geologic formations found on our planet.
In a recent paper published in the journal Astrobiology, geo-biologist Nora Noffke at Old Dominion University in Virginia compared these sediments on Earth to those revealed in photos from the Curiosity, noting the similarities between the formations, pointing to the conclusion that at some time in the far distant past Mars must have had colonies of microbes that went extinct at some point. Her analysis is a critical piece of the puzzle in putting together the history of Mars.
“I’ve seen many papers that say ‘Look, here’s a pile of dirt on Mars, and here’s a pile of dirt on Earth,'” says Chris McKay, a planetary scientist at NASA’s Ames Research Center and an associate editor of the journal Astrobiology. “And because they look the same, the same mechanism must have made each pile on the two planets.'”
McKay adds: “That’s an easy argument to make, and it’s typically not very convincing. However, Noffke’s paper is the most carefully done analysis of the sort that I’ve seen, which is why it’s the first of its kind published in Astrobiology.” [Source]
This is only the most recent evidence that we’ve gathered which points to the conclusion that Mars is, or at least at some point was, a life-bearing planet.
This debate has heated up in the last couple of decades as NASA has produced many provocative images of anomalous rock formations, as well as detecting methane ‘burps’ on the planet, an organic chemical that is produced by either biological or non-biological sources. These puzzling spikes in methane emissions on Mars raise the question of whether or not there are active plumes still emitting gas that could be a by-product of water and organic life.
“NASA’s Mars Curiosity rover has measured a tenfold spike in methane, an organic chemical, in the atmosphere around it and detected other organic molecules in a rock-powder sample collected by the robotic laboratory’s drill.”
“This temporary increase in methane—sharply up and then back down—tells us there must be some relatively localized source,” said Sushil Atreya of the University of Michigan, Ann Arbor, and Curiosity rover science team. “There are many possible sources, biological or non-biological, such as interaction of water and rock.” [Source]
There is Water on Mars
NASA has also discovered water on Mars in the form of ice, and upon examination of geologic information, is speculating that at one point water was in free flowing abundance on the planet. In 2014 the Mars Reconnaissance Orbiter produced images that revealed “recurring slope lineae,” or dark flowing lines that appear to move down the slopes of Martian mountains, indicating the former presence of an abundance of water.
Additionally, many peculiar photos have been gathered that show unusual structures including strange looking craters, geometric shapes, pyramids, and other objects that appear to be artifacts or ruins on the Martian planet, many of which look very ‘alien’ and unlike any natural formations we see on planet earth.
Launched in 1975, NASA’s Viking I spacecraft was the first vehicle to land on the planet, taking startling images of rock formations on the surface of Mars, the most famous of which, taken in Mars’ Cydonia region, appears to be in the likeness of the human face. Although controversial, for many this raises the question: is this ‘face’ a naturally occurring structure, or is it evidence of an ancient civilization that once colonized the planet?
In 2014, images from the Mars Opportunity Rover captured an extraordinary anomaly when a rock, or some other object, appeared out of nowhere. Prior images of the same location just 4 days old showed no such rock, and NASA has offered the explanation that the rock was dislodged from a hilltop by the rover itself.
Our Ancient Origins
So much of the history of the human race has been lost or hidden from the masses, however, many independent thinkers and researchers have posited that there was once a vibrant alien colony on the planet, and that at some point something happened on Mars that harshly altered its environment. Physicist Dr. John Brandenburg recently made headlines with his claim that geo-chemical samples from the Martian planet reveal evidence of major nuclear fallout, a result of a massive nuclear war, which was responsible for the destruction of its biosphere and the ancient civilizations that inhabited Cydonia and Utopia.
The questions about the existence of life on Mars are puzzling both scientists and journalists, yet with each new piece of data that is recovered, the debate only grows. Could it be that there once was an advanced humanoid civilization on our neighboring planet, and that Earth itself is in danger of suffering the same terrible fate which laid waste to Mars?
About the Author
Buck Rogers is the earth bound incarnation of that familiar part of our timeless cosmic selves, the rebel within. He is a surfer of ideals and meditates often on the promise of happiness in a world battered by the angry seas of human thoughtlessness. He is a staff writer for WakingTimes.com.
June 2014 saw excited reports that NASA was working on a faster-than-light warp drive starship. Astonishingly, weeks later we are being told that NASA has also successfully tested a device which could push along a space vehicle without consuming any propellant. If true, this would be an astonishing discovery, not only violating laws which are cornerstones of science but also possibly allowing easy access to the worlds of the Solar System. But are these latest reports correct?
(For the latest developments in this affair, see NASA’s Space Drive: the Plot Thickens.)
In physics, momentum is a quantity obtained by multiplying a body’s mass and velocity (velocity is not just speed, it is speed in a set direction, an important distinction). Both theory and centuries of practice indicate that momentum is conserved, essentially meaning that it is never created or destroyed. Let me illustrate this with a pertinent example.
Imagine a spacecraft floating in empty space. Inside it are tanks of propellant, say liquid hydrogen and oxygen, and a rocket motor. When the craft’s motor is turned on, the hydrogen and oxygen are burned together in the combustion chamber, creating hot gases which are allowed to escape at very high speed out of a nozzle, pushing the spacecraft forward. Looking more closely, every second the motor operates, a relatively small mass of gas is emitted at high speed out of the back of the spacecraft as the exhaust. A small mass of gas multiplied by a high speed rearward yields a significant momentum in that direction. To balance the books (conserve momentum), the spacecraft must move with an equal and opposite momentum, so it shoots forward (its mass will be greater than that of the gas in the exhaust, so its velocity will be lower, but the spacecraft’s velocity will keep building up as long as the rocket motor is fed propellant). The spacecraft’s motion in response to the escaping propellant is termed a ‘reaction’. A rocket motor is a reaction engine (or “drive” in science fiction parlance).
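The bookkeeping in this example can be sketched in a few lines of code; the masses and exhaust speed below are illustrative values, not figures from this article:

```python
# Momentum conservation for a rocket: each second the motor expels a small
# mass of gas at high speed, and the craft gains equal and opposite momentum.
# All numbers here are illustrative, not taken from the article.

def velocity_gain(craft_mass_kg, gas_mass_kg, exhaust_speed_m_s):
    """Speed gained by the craft after expelling one parcel of gas.

    In the craft's initial rest frame total momentum is zero, so the
    craft's momentum must exactly cancel the exhaust's momentum.
    """
    gas_momentum = gas_mass_kg * exhaust_speed_m_s
    return gas_momentum / craft_mass_kg

# A 10,000 kg craft expelling 2 kg of gas per second at 4,500 m/s
# (a typical hydrogen/oxygen exhaust speed):
dv = velocity_gain(10_000, 2, 4_500)
print(f"craft gains {dv:.2f} m/s for each second of burn")  # 0.90 m/s
```

The craft's velocity change is small each second, but it accumulates for as long as propellant lasts, which is exactly why rockets work and why they need so much fuel.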
Momentum conservation is predicted by Isaac Newton’s laws of motion (and in modified form Einsteinian relativity) and is observed throughout science and utilised in engineering all the way from collisions of subatomic particles to launching probes to the planets.
However, rockets are clumsy and inefficient; to accelerate to meaningful speeds, vast quantities of propellant must be carried and consumed. Perhaps 90% of a rocket’s mass at launch is propellant, and perhaps only 10% structure and payload. This is a sad fact, meaning rockets sent on missions into space must always be behemoths, suggesting space travel will forever be difficult and expensive. What if there were an easier way? Could there be entirely new physics (or “loopholes” in existing physics) permitting a “reactionless drive” which would run solely on electric power without carrying any messy and bulky propellant? A spacecraft with a reactionless thruster would be a space enthusiast’s dream, rising silently into the sky without the sound and fury of a rocket launch, permitting a probe or even a spacecraft with a human crew to roam the planets. Unfortunately this seems impossible. Yet some disagree.
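The penalty described here follows from the standard Tsiolkovsky rocket equation, Δv = vₑ ln(m₀/m₁). A quick sketch (the exhaust speed is an assumed, typical hydrogen/oxygen value, not a figure from the article) shows why even a rocket that is 90% propellant only barely reaches orbital speeds:

```python
import math

def delta_v(exhaust_speed_m_s, mass_initial, mass_final):
    """Tsiolkovsky rocket equation: total speed change a rocket can achieve."""
    return exhaust_speed_m_s * math.log(mass_initial / mass_final)

# 90% propellant, 10% structure and payload, 4,500 m/s exhaust speed
# (roughly what a good hydrogen/oxygen engine delivers):
dv = delta_v(4_500, mass_initial=100.0, mass_final=10.0)
print(f"total delta-v: {dv / 1000:.1f} km/s")  # ~10.4 km/s
```

Low Earth orbit requires roughly 9 to 10 km/s once drag and gravity losses are counted, so a 90%-propellant vehicle is near the practical limit, which is why the logarithm in this equation makes rockets such behemoths.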
Dozens if not hundreds of concepts for reactionless drives have been proposed, the vast majority being the fantasies of science fiction authors or crackpots, or the lies of scammers. However, this is not always the case. Roger J. Shawyer, a British aerospace engineer with impeccable professional qualifications, has proposed a device he calls an EmDrive.
Shawyer’s EmDrive thruster is a magnetron, a microwave generator, inside a specially shaped, tapering resonant cavity whose area is greater at one end. Both ends of the cavity are sealed. Essentially an EmDrive unit is a metal can with a microwave source inside. When it is turned on, the EmDrive’s magnetron emits microwaves which bounce around inside the cavity pushing against its sides. According to Shawyer, thanks to the cavity’s shape there is a slight imbalance in the pressure exerted by the microwaves which manifests as a thrust, hence the thruster moves without emitting any exhaust. An alternative name for the concept is RF resonant cavity thruster.
Electricity is apparently being turned directly into thrust in defiance of the conservation of momentum law. Shawyer believes that his concept’s behaviour is permitted under Einstein relativity (hence the device is actually called a “relativity drive” by some) and he insists it obeys Newton’s laws and conserves momentum. He has written highly mathematical papers to justify this and claims to have successfully tested prototypes. Eureka magazine’s website has a video of an EmDrive being demonstrated. Shawyer has created a company (with the help of a £45 000 grant from the UK’s Department of Trade and Industry) to develop this technology. Shawyer has shared his beliefs on the theory and potential of his device in a series of videos.
Shawyer’s proposal has received some positive coverage in engineering journals and websites but not from many scientific publications (apart from New Scientist, which positively gushed enthusiasm). The science community has been largely reluctant to repeat Shawyer’s research because his theoretical justification sounds frankly absurd. All electromagnetic waves, such as microwaves, possess momentum. This means that a beam of microwaves does indeed exert thrust and you could actually make a grossly inefficient rocket based on the principle of an exhaust of microwaves alone (it would in fact be a form of photon rocket), but that is not what Shawyer claims to have invented. The microwaves are trapped in his device, and do not escape as an exhaust, making it reactionless.
Take one of those little RC helicopters you can fly indoors. Imagine getting an incredibly light-weight cardboard box, putting the helicopter inside and sealing the lid before turning the helicopter on. Will the box rise into the air thanks to the spinning rotor inside? This is comparable to what Shawyer claims his device does.
Since I wrote the above paragraph I have read Shawyer’s document A Note on the Principles of EmDrive force measurement, which muddies the waters considerably. In it Shawyer claims his device cannot generate thrust when at rest; instead it must be in accelerating motion. If correct, this means that you cannot measure an EmDrive’s thrust by placing it on a balance (Shawyer explicitly states this); instead it must be accelerated by an external force while the measurement takes place. This is both inconvenient for experimenters and really odd physically.
The physics community mostly believes Shawyer is profoundly mistaken (laying my cards on the table, I would agree with this opinion). However if a prototype were to be tested in space conditions and work as advertised then physicists, scenting a Nobel prize, would really pay attention.
Other experimenters have indeed attempted to duplicate Shawyer’s research. The Boeing aerospace company has investigated Shawyer’s technology, but this does not seem to have led anywhere. Juan Yang, a professor of propulsion theory and engineering of aeronautics and astronautics at Northwestern Polytechnical University (NWPU) in Xi’an, China, claimed to have tested a high-power EmDrive on a rocket motor test rig in 2010. Yang’s published data suggests the EmDrive passed its tests with flying colours, but she has not convinced many others to revisit Shawyer’s brainchild (see updates at end of article).
Another inventor, Guido P. Fetta, has suggested a similar device to the EmDrive that he has called the Cannae drive (confusingly also known as the Q-drive). Fetta, with a “background as a sales and marketing executive with more than 20 years of experience in the chemical, pharmaceutical and food ingredient industries”, owns a company called Cannae LLC to exploit his research. Although the Cannae device is also essentially a metal can with a microwave source inside, some report that it is intended to operate under entirely different principles to the EmDrive, perhaps exploiting quantum mechanics to violate the laws of classical physics. I cannot verify this as the Cannae website has nothing to say about it.
The Cannae device is a thick disc-shaped resonant cavity with radial slots in one inside face; according to its inventor, these are vital to produce an internal force imbalance leading to an external thrust. I recommend everyone read the patent for the Cannae device, which discusses how it could be applied for “energy harvesting”, suggesting Fetta believes he has also invented a free-energy device. This makes the concept self-confessed nonsense.
Bringing the story up to date, in 2013-14 a team from NASA’s advanced propulsion thinktank, the Eagleworks Laboratories, tested Cannae drive and “tapered cavity” devices with interesting results. These were published in the paper Anomalous Thrust Production from an RF Test Device Measured on a Low-Thrust Torsion Pendulum. The experimenters describe how they placed drive units on a torsion pendulum capable of detecting thrusts “at a single-digit micronewton level” in a stainless steel vacuum chamber. When the Cannae devices were supplied with around 30 watts of power, the tests measured them to generate 30-50 micro-Newtons of thrust. These are fantastically small forces, equivalent perhaps to the weight of a sand grain; measuring them at all is an achievement, as the environment is full of noise (such as the footsteps of passersby) that could swamp the signal.
The experiments with the tapered cavity device (which is not called an EmDrive in the paper) found that “the presence of some sort of dielectric RF resonator in the thrust chambers” was essential to observe a thrust from the device. When it worked, the authors saw an average thrust of 91.2 micro-Newtons generated for an input power of about 17 watts. This means the device has a “thrust to power ratio” of 5.3 micro-Newtons per watt; this statistic is rather esoteric, but it will be important later.
After describing the experiments and their results, the paper suggests refinements to both the authors’ techniques and their equipment for further investigation. The team’s paper ends by discussing in detail possible human space missions to the moons of Mars and Saturn that would be possible if a reactionless drive based on improved scaled up versions of their test articles were to be used. To space buffs and science fiction fans (and I am both) these projected voyages are a mouth-watering prospect.
Despite the tiny measured thrusts, this is a startling announcement. The NASA researchers seem to have found a flaw in a centuries old central dogma of science, opening the possibility of a wonderful new era of interplanetary travel. This seems to be news worthy of the attention it is receiving. Sadly it is not as simple as that. In fact I am rather dubious and here are my reasons to be sceptical.
- The researchers describe the vacuum chamber used in the experiment in a lot of detail (in the section “II. Thrust Measurement System Torsion Pendulum” of their paper), yet the testing was not actually conducted in a vacuum, rather with the vacuum chamber “door closed but at ambient atmospheric pressure”. This was because the capacitors used in the test devices could not survive vacuum conditions; I presume this was a last-minute discovery, but the test programme went ahead regardless. The section “VI. Summary and Forward Work” recommends that future tests be performed in a vacuum. Not performing these tests in a vacuum is a serious blow to the experiment’s credibility. The slightest air current could interfere with so slight a measurement. I originally suggested that the electrical current fed to the drive device was generating heat which caused convectional air currents, moving the device on its pendulum. However, the paper seems to indicate thrust occurs instantly when the power is applied and drops immediately to zero when the power is cut off. That seems to suggest the device is not affected by self-generated convectional currents. (UPDATE: in February 2015 one of the Eagleworks team, Paul March, reported the tests have now been repeated in a vacuum, obtaining measured thrusts of about 50 micronewtons. March says if they can obtain thrusts of at least 100 micronewtons there will be an attempt to replicate these results at NASA’s Glenn Research Center.)
- The research team also tested a Cannae device designed to accept electrical power but not to function as a thrust-generating unit. To make it inoperable it was manufactured without the slots its inventor believes to be essential for its operation. Yet the team measured a force generated from this device too! This non-functional device was not an experimental control; instead the researchers also tested an RF load with no functioning components (a resistor) and indeed measured zero thrust for that test. It is extremely odd that a device designed by its creator to be inoperable “works” just as well as “functional” devices.
- The team suggest this is not actually reactionless propulsion (indicating that they know how outrageous this would be) but rather that momentum is being transferred “via the quantum vacuum virtual plasma”. This sounds profoundly impressive, but it is also scarily like Star Trek-style technobabble. To the best of my knowledge quantum mechanics predicts that all space is permeated by a “sea” of virtual particles, but I have never seen this described as a “plasma” before. It is also intriguing that this hypothesis has absolutely no common ground with how Shawyer claims his EmDrive should work. Shawyer says an EmDrive must be accelerated by an external force while the measurement takes place, so according to him the NASA experimenters should not have seen a thrust from their stationary devices!
- Harold White, a team member, has, shall I say, form in presenting his team’s research in a prematurely positive way.
- Most damning, in my opinion, is the reported “thrust to power ratio” of 5.3 micro-Newtons per watt. Say the device was not a closed cavity after all and instead just squirted out microwaves. As mentioned earlier, the microwave beam would actually act as a rocket exhaust. You can calculate the thrust to power ratio of such a beam: it comes out as 3.3 nano-Newtons per watt. This very low efficiency is a consequence of physical law and is the best that can ever be achieved. Yet the NASA team claim to have observed an efficiency about 1500 times greater! This seems impossible.
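The photon-beam ceiling invoked in the last point is easy to verify: a beam of power P carries momentum P/c per second, so the best possible thrust-to-power ratio for any radiation "exhaust" is 1/c. Comparing that limit with the figures reported in the paper:

```python
# Thrust-to-power ratio of a pure photon (or microwave) beam: F = P / c.
# This is the hard physical ceiling for any drive that works by emitting
# radiation as its exhaust.
c = 299_792_458  # speed of light, m/s

photon_ratio = 1 / c             # newtons per watt, the physical limit
claimed_ratio = 91.2e-6 / 17     # Eagleworks figures: 91.2 uN at ~17 W

print(f"photon-beam limit : {photon_ratio * 1e9:.2f} nN/W")   # ~3.34 nN/W
print(f"claimed ratio     : {claimed_ratio * 1e6:.2f} uN/W")  # ~5.36 uN/W
print(f"excess factor     : {claimed_ratio / photon_ratio:.0f}x")  # ~1600x
```

The computed excess comes out around 1600, consistent with the rough "about 1500 times" figure quoted above, and it is why leaking microwaves cannot explain the measured thrust.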
While I’m at it, can I also clarify some misconceptions about this technology:
- It is not an anti-gravity device
- It is not in any way based on the work of Nikola Tesla
- It is not based on “flying saucer” technology
- It doesn’t prove “Einstein was wrong” (on reflection, actually it does)
- It has nothing to do with “Electric Universe” Theory (don’t even ask!)
I would love this to be real, as it would be the greatest step forward in space travel ever, sadly over the years I have seen so many such steps come, go and disappear without a trace. Once again I am sorry to throw cold water on so exciting a story but in short, the concept of reactionless propulsion is still as impossible as it has ever been. NASA has not overturned Newtonian dynamics. A small-scale research project inside NASA has tested a device based on exotic, if not fringe, science, claimed to see anomalous results and placed these forward for scrutiny. Perhaps more research will show this to be nothing real or even verify these findings with exciting results. Let’s wait and see.
UPDATES: In July 2015 researchers in Germany reported further inconclusive tests on an EmDrive-style device. Although some excited reports have claimed this proves the device’s validity, the authors state their tests neither confirm nor refute it.
In 2016 Yang et al. apparently published a paper reporting that a greatly improved experimental setup failed to observe thrust from their devices.
In November 2016 White et al. published a paper, Measurement of Impulsive Thrust from a Closed Radio-Frequency Cavity in Vacuum, describing a test program with their devices in a vacuum, reporting 9 measurements of thrusts in the 30-119 micro-Newton range and proposing this effect to be a consequence of pilot-wave theory, an unconventional interpretation of quantum effects. I am eager to see if these results can be replicated by other researchers.
(Article by Colin Johnston, Science Education Director)
(Last update 4 November 2015) | 0.801135 | 3.770094 |
A black hole lying just 1000 light-years from Earth has been discovered by a team of astronomers from the European Southern Observatory. It is closer to our Solar System than any other found to date and forms part of a triple system that can be seen with the naked eye. Located in the constellation of Telescopium, it can be viewed from the southern hemisphere on a dark, clear night without a telescope or binoculars. What is truly remarkable is that this is the first stellar system with a black hole that can be seen with the unaided eye.
Astronomers have also found a star that survived being swallowed by a black hole.
It is normal behaviour for a black hole to belch out tremendous flares of X-rays, generated by material heating to intense temperatures as it is sucked towards the black hole, so bright we can detect them from Earth. What isn’t normal is for those X-ray flares to spew forth with clockwork regularity, a puzzling behaviour reported last year from a supermassive black hole at the center of a galaxy 250 million light-years away. Every nine hours, boom: an X-ray flare. After some study, it was discovered that a dead star was the cause. It endured its brush with the black hole, trapped on a nine-hour elliptical orbit around it.
“In astronomical terms, this event is only visible to our current telescopes for a short time – about 2,000 years, so unless we were extraordinarily lucky to have caught this one, there may be many more that we are missing elsewhere in the Universe,” – Andrew King, Professor of Theoretical Astrophysics in the University’s School of Physics and Astronomy. | 0.921094 | 3.635884 |
February 29, 2016 – Immediately after its 2008 launch, NASA’s Interstellar Boundary Explorer (IBEX), spotted a curiosity in a thin slice of space: More particles streamed in through a long, skinny swath in the sky than anywhere else. The origin of the so-called IBEX ribbon was unknown – but its very existence opened doors to observing what lies outside our solar system, the way drops of rain on a window tell you more about the weather outside.
Now, a new study uses IBEX data and simulations of the interstellar boundary – which lies at the very edge of the giant magnetic bubble surrounding our solar system called the heliosphere – to better describe space in our galactic neighborhood. The paper, published February 8, 2016, in The Astrophysical Journal Letters, precisely determines the strength and direction of the magnetic field outside the heliosphere. Such information gives us a peek into the magnetic forces that dominate the galaxy beyond, teaching us more about our home in space.
The new paper is based on one particular theory of the origin of the IBEX ribbon, in which the particles streaming in from the ribbon are actually solar material reflected back at us after a long journey to the edges of the sun’s magnetic boundaries. A giant bubble, known as the heliosphere, exists around the sun and is filled with what’s called solar wind, the sun’s constant outflow of ionized gas, known as plasma. When these particles reach the edges of the heliosphere, their motion becomes more complicated.
“The theory says that some solar wind protons are sent flying back towards the sun as neutral atoms after a complex series of charge exchanges, creating the IBEX ribbon,” said Eric Zirnstein, a space scientist at the Southwest Research Institute in San Antonio, Texas, and lead author on the study. “Simulations and IBEX observations pinpoint this process – which takes anywhere from three to six years on average – as the most likely origin of the IBEX ribbon.”
Outside the heliosphere lies the interstellar medium, with plasma that has different speed, density, and temperature than solar wind plasma, as well as neutral gases. These materials interact at the heliosphere’s edge to create a region known as the inner heliosheath, bounded on the inside by the termination shock – which is more than twice as far from us as the orbit of Pluto – and on the outside by the heliopause, the boundary between the solar wind and the comparatively dense interstellar medium.
Some solar wind protons that flow out from the sun to this boundary region will gain an electron, making them neutral and allowing them to cross the heliopause. Once in the interstellar medium, they can lose that electron again, making them gyrate around the interstellar magnetic field. If those particles pick up another electron at the right place and time, they can be fired back into the heliosphere, travel all the way back toward Earth, and collide with IBEX’s detector. The particles carry information about all that interaction with the interstellar magnetic field, and as they hit the detector they can give us unprecedented insight into the characteristics of that region of space.
“Only Voyager 1 has ever made direct observations of the interstellar magnetic field, and those are close to the heliopause, where it’s distorted,” said Zirnstein. “But this analysis provides a nice determination of its strength and direction farther out.”
The directions of different ribbon particles shooting back toward Earth are determined by the characteristics of the interstellar magnetic field. For instance, simulations show that the most energetic particles come from a different region of space than the least energetic particles, which gives clues as to how the interstellar magnetic field interacts with the heliosphere.
For the recent study, such observations were used to seed simulations of the ribbon’s origin. Not only do these simulations correctly predict the locations of neutral ribbon particles at different energies, but the deduced interstellar magnetic field agrees with Voyager 1 measurements, the deflection of interstellar neutral gases, and observations of distant polarized starlight.
However, some early simulations of the interstellar magnetic field don’t quite line up. Those pre-IBEX estimates were based largely on two data points – the distances at which Voyagers 1 and 2 crossed the termination shock.
“Voyager 1 crossed the termination shock at 94 astronomical units, or AU, from the sun, and Voyager 2 at 84 AU,” said Zirnstein. One AU is equal to about 93 million miles, the average distance between Earth and the sun. “That difference of almost 930 million miles was mostly explained by a strong, very tilted interstellar magnetic field pushing on the heliosphere.”
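The distance gap quoted here is a simple conversion to check, using the approximate 93-million-mile value for one AU given in the text:

```python
AU_MILES = 93_000_000  # average Earth-sun distance, approximate value from the text

# Voyager 1 crossed the termination shock at 94 AU, Voyager 2 at 84 AU.
gap_au = 94 - 84
gap_miles = gap_au * AU_MILES
print(f"termination-shock gap: {gap_au} AU = {gap_miles:,} miles")  # 930,000,000 miles
```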
But that difference may be accounted for by considering a stronger influence from the solar cycle, which can lead to changes in the strength of the solar wind and thus change the distance to the termination shock in the directions of Voyager 1 and 2. The two Voyager spacecraft made their measurements almost three years apart, giving plenty of time for the variable solar wind to change the distance of the termination shock.
“Scientists in the field are developing more sophisticated models of the time-dependent solar wind,” said Zirnstein.
The simulations generally jibe well with the Voyager data.
“The new findings can be used to better understand how our space environment interacts with the interstellar environment beyond the heliopause,” said Eric Christian, IBEX program scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, who was not involved in this study. “In turn, understanding that interaction could help explain the mystery of what causes the IBEX ribbon once and for all.”
The Southwest Research Institute leads IBEX with teams of national and international partners. NASA Goddard manages the Explorers Program for the agency’s Heliophysics Division within the Science Mission Directorate in Washington. | 0.848314 | 4.076288 |
The video looks like YouTube clickbait—Worst Drone Pilot Ever. A small, spindly rotorcraft darts maniacally around an enclosed chamber like a crazed winged insect until it finally, inevitably, goes splat! “The crash was phenomenal,” says MiMi Aung, laughing infectiously. “It’s a really fun video.”
Entertaining but also enlightening. The video demonstrated that it is theoretically possible for a helicopter to stay aloft on Mars—in an atmosphere 100 times thinner than Earth’s—and helped convince NASA to greenlight development of the first aircraft to be dispatched to another planet. Eighteen months later, in the same 25-foot chamber at the Jet Propulsion Laboratory (JPL) in Pasadena, California, a larger autonomous model hovered, rotated, and moved from side to side without a glitch. The team behind the Mars Helicopter calls it their Wright brothers-at-Kitty Hawk moment.
Earlier this year, another model, the flight model, was tested successfully in the chamber. Though weighing less than four pounds, this full-size vehicle contains everything necessary to fly autonomously on Mars. A few months from now, it will be attached to the belly pan of a six-wheeled rover in preparation for the Mars 2020 mission. If all goes well, the rotorcraft will be deployed on the Martian surface in 2021, where, over the course of 30 days, it will attempt five historic flights. “This is a game-changer,” says Aung, who is the project manager for the Mars Helicopter. “Right now, we explore deep space from orbit or with rovers, but we don’t have any vehicles taking advantage of the aerial dimension. This will allow us to get to places we can’t get to with rovers—or even with astronauts.”
Gravity on the Red Planet is roughly 38 percent of what it is here on Earth, but the thin atmosphere presented what a lot of smart people believed was a show-stopper. Less air, of course, means less lift. For conventional airplanes, this translates into extremely long, high-speed takeoff runs, and landings would be even more daunting. But a helicopter seemed like a potential alternative since, unlike a fixed-wing airplane, its blades create their own airflow to generate lift. As Aung explains, “A helicopter/rotary-wing craft can build up the required airspeed over the blades while standing stationary.”
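A back-of-the-envelope sketch of "less air means less lift": for a fixed wing, lift goes as air density times airspeed squared, so the airspeed needed to support a given weight scales as the square root of weight over density. Using the article's roughly 1/100 density figure and Mars's 38% gravity (both approximate inputs):

```python
import math

# For equal lift coefficient and wing area, lift L = 0.5 * rho * v^2 * S * Cl,
# so required airspeed scales as sqrt(weight / density).
density_ratio = 1 / 100   # Mars atmosphere vs Earth, per the article
gravity_ratio = 0.38      # Mars surface gravity vs Earth

speed_ratio = math.sqrt(gravity_ratio / density_ratio)
print(f"takeoff airspeed must be ~{speed_ratio:.1f}x higher on Mars")  # ~6.2x
```

Needing roughly six times the airspeed is what makes conventional takeoff runs on Mars so daunting, and why a rotor that generates its own airflow is attractive.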
In theory, at least. But as a practical matter, nobody knew whether the blades could be spun fast enough to support a fuselage housing motors, avionics, and flight controls, not to mention cameras, radios, and antennas—in short, all the components needed to make the craft useful on Mars. To fully appreciate the challenge, consider that the highest altitude ever achieved by a helicopter on Earth was about 40,000 feet. On Mars, the thin atmosphere would be the equivalent of attempting to fly a helicopter at 100,000 feet.
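The "100,000 feet" comparison can be roughly cross-checked by asking at what Earth altitude the air thins to Mars surface density, using a simple isothermal exponential-atmosphere model. The scale height and Mars density below are assumed textbook values, not figures from the article, and the crude model lands in the same ballpark:

```python
import math

# Isothermal atmosphere: density(h) = rho0 * exp(-h / H).
# Solve for the altitude h where Earth's density falls to Mars surface density.
H = 8_500            # Earth atmospheric scale height, m (assumed)
rho_earth = 1.225    # Earth sea-level density, kg/m^3
rho_mars = 0.020     # Mars surface density, kg/m^3 (assumed textbook value)

h = H * math.log(rho_earth / rho_mars)
print(f"equivalent Earth altitude: ~{h / 0.3048:,.0f} ft")  # roughly 115,000 ft
```

The simple model comes out a little above 100,000 feet, which is consistent with the comparison quoted in the article given how approximate the inputs are.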
Larry Young, a rotorcraft wonk at NASA’s Ames Research Center in Silicon Valley, began studying the issue in 1997. “Initially, I was a little bit skeptical,” he says. “However, with additional first-order analysis using insights from aerodynamics for micro-rotorcraft and other micro aerial vehicles—which I was also studying at the time—plus weight-trend information derived from HALE [high-altitude and long-endurance] aircraft, I concluded that a Mars helicopter of less than 100 kilograms [220 pounds] might indeed be possible.”
Hover tests of a four-bladed, eight-foot-diameter, ultra-lightweight rotor built by Micro Craft were conducted in the N-242 environmental test chamber at Ames. The success of the program inspired Young to publish a paper trumpeting the viability of the concept.
Meanwhile, independently, JPL engineer Bob Balaram attended a robotics conference in San Francisco. During a presentation about proposed miniature helicopters, Balaram realized that the Reynolds number—which expresses the performance of an airfoil based on the density and viscosity of the air it’s operating in—of these miniature “mesicopters” would be roughly the same as a helicopter with a bigger wing in thinner air. “So it kind of scaled,” he says.
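Balaram's scaling observation can be sketched numerically. The Reynolds number is Re = ρvL/μ; all the rotor dimensions, tip speeds, and viscosities below are illustrative assumptions, chosen only to show that a tiny rotor in dense Earth air and a metre-scale rotor in thin Martian air can land at a similar Re:

```python
def reynolds(density, speed, chord, viscosity):
    """Re = rho * v * L / mu -- the airfoil-performance scaling parameter."""
    return density * speed * chord / viscosity

# Illustrative, assumed numbers (not from the article):
# a centimetre-chord "mesicopter" blade in Earth air vs a larger blade
# moving much faster in Mars-density carbon dioxide.
re_earth_micro = reynolds(density=1.225, speed=40, chord=0.01, viscosity=1.8e-5)
re_mars_full = reynolds(density=0.017, speed=230, chord=0.06, viscosity=1.1e-5)

print(f"micro rotor on Earth: Re ~ {re_earth_micro:,.0f}")  # ~27,000
print(f"large rotor on Mars : Re ~ {re_mars_full:,.0f}")    # ~21,000
```

Both cases come out in the low tens of thousands, the same aerodynamic regime, which is the sense in which the mesicopter results "kind of scaled" to a Mars helicopter.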
Stanford University provided an eight-inch-diameter rotor that was mounted on a pivot in JPL’s 10-foot vacuum chamber, which was pumped down to simulate the atmosphere on Mars. When the blades rotated at 7,000 rpm, the pivot changed angle, demonstrating that sufficient lift could be generated to fly in the Martian environment—assuming, of course, that the vehicle were light enough. Alas, funds for the program never materialized.
The idea remained on the shelf until 2012, when Aung was leading then JPL director Charles Elachi on a tour of the Autonomous Systems Division. In one of the labs, drones were being used to demonstrate onboard navigation algorithms. Elachi turned to chief financial officer René Fradet and asked, “Hey, why don’t we do that on Mars?” Balaram dusted off his old research and briefed Elachi about his findings. After mulling it over for a week, Elachi told Balaram, “Okay, I’ve got some study money for you.”
Right from the start, the engineers faced a dual challenge: designing a helicopter that could fly on Mars while also surviving the “seven minutes of terror” of landing on the Red Planet attached to the bottom of a one-ton rover. Packaging considerations limited the length of the blades to no longer than 1.2 meters (four feet).
JPL commissioned AeroVironment, the advanced-engineering firm in Southern California that had created one of the first drones to be deployed in combat—the FQM-151 Pointer—to build a one-third-scale test model. The vehicle was mounted on a vertical rail in JPL’s 25-foot vacuum chamber, a National Historic Landmark officially known as the Space Simulator, where every spacecraft built at the facility since 1962 has been tested. With the blades rotating at 8,000 rpm (to compensate for the small scale), the drone produced enough lift to climb off the ground. This prompted JPL to invest a bit more money to see how well the model could fly without a rail to keep it on the straight and narrow. With Matt Keennon, AeroVironment’s top drone jock, operating a joystick, the model flew freely around the chamber. Too freely, actually, hence the spectacular crash in the Worst Drone Pilot Ever video.
“Rotorcraft are very hard to model,” says Håvard Fjær Grip, a fixed-wing pilot who leads the guidance, navigation, and control team for the Mars Helicopter. “When we flew the vehicle, we anticipated that it wasn’t going to behave exactly as the [computer] model, and it didn’t. But it did what it was supposed to do, showing that it could produce enough thrust to get off the ground.”
On the strength of the one-third-scale vehicle tests, NASA agreed in January 2015 to fund the development of a full-size iteration, which came to be known as the “risk reduction” vehicle. As project manager, Aung realized the program demanded a multidisciplinary structure. She assembled a team of scientists, engineers, and technicians leveraging all of NASA’s expertise. In terms of full-time-equivalent employees, the head count never exceeded 65, but Aung says more than 150 people worked on the program at JPL, AeroVironment, and the NASA research centers at Ames and Langley.
Every decision about the design of the helicopter was filtered through the prism of mass. Balaram, an inveterate backpacker whose office wall is dominated by a dramatic landscape photo shot at one of his campsites, put the vehicle on a diet. “I used the same philosophy that I use with my backpack,” he says. Each additional gram meant more energy to support it, which added even more weight, which required even more energy—a vicious cycle that threatened the entire project. “There was a time when we had a bit of a runaway mass problem, and it was a challenge to stay within the bounds dictated by the physics,” Balaram acknowledges.
To fit snugly beneath the Mars rover, the team rejected a conventional tail rotor in favor of a co-axial design featuring two horizontal blades rotating in opposite directions to cancel out their respective torque. The rotors are mounted on a central mast. At the top of the mast, a solar array harvests energy for the lithium-ion batteries. At the bottom is a small cube, about two feet square. A smaller hard-shell container inside this fuselage houses most of the hardware (and software). Four narrow but flexible carbon-fiber legs serve as the low-tech landing system.
The helicopter carries eight electric motors. The rotors are powered by a pair of custom-made 23-pole solid-state DC units filled with square copper wire that AeroVironment’s Keennon hand-wound using a microscope—an incredibly tedious process that took 100 hours per motor. The other six motors drive the swashplates attached to each of the rotors. (Swashplates change the angle of the blades to allow the helicopter to pitch, rotate, and yaw.)
The lithium-ion battery system comes from the vaping industry. Cellphone technology provided the high-level processor and the cameras—a 13-megapixel color device for taking high-resolution photos and a black-and-white camera to provide data for the relatively crude visual navigation system. The low-level processor comes from the automotive world; the laser rangefinder, from robotic applications. “This wouldn’t have been possible without commercial off-the-shelf components,” Grip says.
Even so, just about every design solution created a new problem. For example, the blades were made of lightweight carbon fiber so they could be spun as fast as possible. But they had to be beefed up when it turned out that the thin atmosphere on Mars lacked the natural damping qualities that reduce vibration and catastrophic resonances here on Earth. Early on, the team had hoped to save a few grams by fitting the upper rotor with a collective, which controls altitude, but no cyclic, which controls pitch and roll. (The lower rotor has both.) But these plans were scrapped when it became apparent that the helicopter needed more control authority. So the team “sharpened their tools,” as Aung puts it, to reduce weight elsewhere.
In May 2016, Aung and company were confident enough to set the Martian helicopter free in the 25-foot vacuum chamber. To compensate for Earth’s gravity, the model had to be scaled down to 850 grams (1.9 pounds), so the power system, computers, and avionics were removed from the vehicle and connected to the fuselage through a long tether.
The fully autonomous risk-reduction flight went off without a hitch. And yet team members sat stone-faced as they watched. “People said, ‘You guys were so under control,’ ” Aung says, recalling the friendly jibes from colleagues. “But we were so happy. There is no way to describe how we felt.”
The success elevated the program from a blip on NASA’s radar to something worth watching more closely. A new round of funding financed a pair of engineering design models (EDMs), which would serve as the template for the actual Mars Helicopter—if, in fact, NASA decided to build it. But even as construction began, the team discovered that it had miscalculated the thermal budget.
Two thirds of the energy produced by the batteries was earmarked for keeping them warm during the frigid Martian nights, and this proved to be tougher than expected. The solar panel was enlarged, and extra battery cells were added, which meant more weight. Fortunately, Keennon’s motors turned out to be more energy-efficient than projected, which saved some mass. Ultimately, the vehicle weighed a tick less than four pounds.
On January 9, 2018, EDM-1 went into the Space Simulator at JPL. Besides being pumped down to simulate the anemic Martian atmosphere, the chamber was filled largely with carbon dioxide to mimic the Red Planet’s oxygen-poor environment. To simulate Martian gravity, the vehicle was attached to a tether known as a gravity offload system, which essentially gave EDM-1 a helping hand staying aloft. The test flight was a slam dunk. But this time, instead of receiving a figurative gold star from NASA, the team got the brass ring—a highly coveted slot as a technology demonstrator on the Mars 2020 mission.
NASA and JPL scientists have already started brainstorming about the future of Martian helicopters. Aung says the thin atmosphere will probably limit them to 10 to 15 kilograms (22 to 33 pounds), which means they won’t be large enough to carry human beings. But they’ll be fitted with scientific instruments as heavy as two-plus pounds. “Future versions of Mars rotorcraft will be able to perform sustained missions that will cover regions of otherwise inaccessible terrain and to perform ‘surface interactive’ science campaigns such as soil and rock analysis and sample retrieval,” Young says. “Such vehicles could enter craters, could fly near cliff faces or other large rock formations, or do low-altitude surveys of ancient riverbeds and deltas.”
That said, the centerpiece of the Mars 2020 expedition will not be the helicopter but the rover carrying it into space. While the rover’s mission is nominally expected to last nearly two Earth years, the helicopter has been allotted only 30 days to do its thing. The primary goal is simply to demonstrate the feasibility of the concept. Hopes are high, but expectations are not. “We’re not a flagship mission with a billion-dollar budget,” Balaram says. “We are a very lean, small-tech demo—high risk, high fail.”
The batteries limit the flight time to 90 seconds, and there’s enough energy to launch only once a day. Airspeed will top out at 10 meters per second (about 22 miles per hour)—two meters per second of ground speed while allowing for up to eight meters per second of wind—and the altitude will be capped at 16.5 feet. The helicopter’s ability to map terrain is marginal, so it will land on the flat and open spot from which it launched. The most ambitious flight on the schedule calls for the vehicle to fly 500 feet before returning to the launch site.
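A back-of-the-envelope check on those numbers (assuming, unrealistically, that the whole 90-second battery allowance is spent moving at the quoted ground speed):

```python
# Sanity check of the flight envelope quoted above. Assumes the full 90 s
# is spent at cruise; climb, hover, and turns are ignored.

FLIGHT_TIME_S = 90.0      # battery-limited flight time
GROUND_SPEED_MS = 2.0     # quoted ground speed
M_PER_FT = 0.3048

max_range_m = GROUND_SPEED_MS * FLIGHT_TIME_S
max_range_ft = max_range_m / M_PER_FT

print(f"max ground track per charge: {max_range_m:.0f} m ({max_range_ft:.0f} ft)")
```

At 2 m/s, one charge buys roughly 180 m (about 590 ft) of ground track, which puts a 500-foot leg near the edge of a single charge and explains why that sortie is the most ambitious on the schedule.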
And after that? TBD. It’s not clear how many cold cycles the components can survive, and there’s no way the helicopter can endure the brutal Martian winter. As Aung puts it, “It’s built to tech demo standards, and it’s not meant to last forever.” Then again, the vehicles JPL has built for Mars have earned a reputation for going above and beyond the call of duty. The Spirit and Opportunity rovers, designed to survive about three months apiece, thrived for six years and 14 years, respectively. So, Balaram says the team has already dubbed the helicopter Wendy.
Wendy? Why Wendy?
Balaram grins and explains: “For ‘we’re not dead yet.’” | 0.835156 | 3.045833 |
This time next year, ESA’s Huygens spaceprobe will be descending through the atmosphere of Saturn’s largest moon, becoming the first spacecraft to land on a body in the outer Solar System.
Earlier this month, the giant ringed planet Saturn was closer to Earth than it will be for the next thirty years. All the planets orbit the Sun as if on a giant racetrack, travelling in the same direction but in different lanes.
Those in the outer lanes have further to travel than those on the inside lanes. So, Earth regularly ‘laps’ the further planets. On New Year’s Eve 2003, Earth overtook Saturn, drawing closer than at any time in the next three decades.
Through a small telescope, Saturn is normally visible as a creamy yellow ‘star’. You may be able to see the ring system that the planet is famous for, and its largest moon Titan will show up as a tiny dot of light.
That tiny dot is the destination for ESA’s Huygens probe and may hold vital clues about how life began on Earth. Titan is the only moon with a thick atmosphere in the Solar System.
Astronomers think this atmosphere might closely match the one Earth possessed billions of years ago, before life began. Certainly Titan’s atmosphere is rich in carbon, the element on which life on Earth is based. What is more, it is all stored in ‘deep freeze’, ten times further from the Sun than the Earth.

The big mystery is Titan’s surface, which is hidden by a cloud layer. This is why ESA built Huygens, to probe through this layer which is impenetrable by Earth-based observations.
In January 2005, Huygens will parachute below the clouds to see what is really going on. Its battery of instruments will return over 1000 images as it floats down and samples the chemistry of this exotic place.
The Titan probe was named Huygens in honour of the Dutch astronomer who discovered Titan in 1655. Launched in October 1997, Huygens is currently in space, hitching a ride on NASA’s Cassini spacecraft.
So look forward to seeing more of Saturn, and to a tiny European spacecraft called Huygens, which in one year’s time will make an historic landing in the quest to uncover the origins of life.
Solar thermal rocket
A solar thermal rocket is a theoretical spacecraft propulsion system that would make use of solar power to directly heat reaction mass, and therefore would not require an electrical generator, like most other forms of solar-powered propulsion do. The rocket would only have to carry the means of capturing solar energy, such as concentrators and mirrors. The heated propellant would be fed through a conventional rocket nozzle to produce thrust. Its engine thrust would be directly related to the surface area of the solar collector and to the local intensity of the solar radiation.
In the shorter term, solar thermal propulsion has been proposed both for longer-life, lower-cost, more efficient use of the sun and more-flexible cryogenic upper stage launch vehicles and for on-orbit propellant depots. Solar thermal propulsion is also a good candidate for use in reusable inter-orbital tugs, as it is a high-efficiency low-thrust system that can be refuelled with relative ease.
Solar-thermal design concepts
There are two solar thermal propulsion concepts, differing primarily in the method by which they use solar power to heat up the propellant:
- Indirect solar heating involves pumping the propellant through passages in a heat exchanger that is heated by solar radiation. The windowless heat exchanger cavity concept is a design taking this radiation absorption approach.
- Direct solar heating involves exposing the propellant directly to solar radiation. The rotating bed concept is one of the preferred concepts for direct solar radiation absorption; it offers higher specific impulse than other direct heating designs by using a retained seed (tantalum carbide or hafnium carbide) approach. The propellant flows through the porous walls of a rotating cylinder, picking up heat from the seeds, which are retained on the walls by the rotation. The carbides are stable at high temperatures and have excellent heat transfer properties.
Due to limitations in the temperature that heat exchanger materials can withstand (approximately 2800 K), the indirect absorption designs cannot achieve specific impulses beyond 900 seconds (9 kN·s/kg = 9 km/s) (or up to 1000 seconds, see below). The direct absorption designs allow higher propellant temperatures and therefore higher specific impulses, approaching 1200 seconds. Even the lower specific impulse, however, represents a significant increase over that of conventional chemical rockets, one that can provide substantial payload gains (45 percent for a LEO-to-GEO mission) at the expense of increased trip time (14 days compared to 10 hours).
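The seconds and kN·s/kg figures above are the same quantity in different units: specific impulse in seconds times standard gravity gives the effective exhaust velocity. A quick conversion sketch:

```python
# Specific impulse (seconds) -> effective exhaust velocity:
# v_e = Isp * g0, which is also the kN·s/kg figure quoted in the text.

G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity_km_s(isp_seconds: float) -> float:
    return isp_seconds * G0 / 1000.0

for isp in (900, 1200):
    print(isp, "s ->", round(exhaust_velocity_km_s(isp), 1), "km/s")
# 900 s -> 8.8 km/s (the text's "9 km/s" is this figure, rounded)
# 1200 s -> 11.8 km/s
```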
Small-scale hardware has been designed and fabricated for the Air Force Rocket Propulsion Laboratory (AFRPL) for ground test evaluation. Systems with 10 to 100 N of thrust have been investigated by SART.
Reusable Orbital Transfer Vehicles (OTV), sometimes called (inter-orbital) space tugs, propelled by solar thermal rockets have been proposed. The concentrators on solar thermal tugs are less susceptible to radiation in the Van Allen belts than the solar arrays of solar electric OTV.
Most proposed designs for solar thermal rockets use hydrogen as their propellant due to its low molecular weight which gives excellent specific impulse of up to 1000 seconds (10 kN·s/kg) using heat exchangers made of rhenium.
Conventional thought has been that hydrogen—although it gives excellent specific impulse—is not space storable. Design work in the early 2010s developed an approach to substantially reduce hydrogen boiloff and to economically utilize the small remaining boiloff product for requisite in-space tasks, essentially achieving zero boiloff (ZBO) from a practical point of view.
Other substances could also be used. Water gives quite poor performance, at 190 seconds (1.9 kN·s/kg), but requires only simple equipment to purify and handle, and is space storable; it has been seriously proposed for interplanetary use with in-situ resources.
Ammonia has been proposed as a propellant. It offers higher specific impulse than water, but is easily storable, with a freezing point of −77 degrees Celsius and a boiling point of −33.34 °C. The exhaust dissociates into hydrogen and nitrogen, leading to a lower average molecular weight, and thus a higher Isp (65% of hydrogen).
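The benefit of a lighter exhaust can be sketched with the ideal-rocket scaling Isp proportional to 1/sqrt(M) at a fixed chamber temperature. This crude model ignores the different operating temperatures of real designs (it underestimates ammonia's quoted 65%-of-hydrogen figure) and is only meant to show the trend:

```python
# Crude scaling sketch: for an ideal rocket at a fixed chamber temperature,
# exhaust velocity (and hence Isp) goes as sqrt(1/M), with M the mean molar
# mass of the exhaust. Real engines run different temperatures for different
# propellants, so the absolute ratios below are illustrative only.

from math import sqrt

MOLAR_MASS = {               # g/mol
    "H2": 2.016,
    "dissociated NH3": 8.5,  # 2 NH3 -> N2 + 3 H2, mean molar mass of the mix
    "H2O": 18.015,
}

ref = MOLAR_MASS["H2"]
for species, m in MOLAR_MASS.items():
    print(f"{species:>16}: Isp relative to hydrogen ~ {sqrt(ref / m):.2f}")
```

The ordering (hydrogen > dissociated ammonia > water) matches the specific-impulse figures quoted in the text.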
A solar-thermal propulsion architecture outperforms architectures involving electrolysis and liquefaction of hydrogen from water by more than an order of magnitude, since electrolysis requires heavy power generators, whereas distillation only requires a simple and compact heat source (either nuclear or solar); the propellant production rate is therefore correspondingly far higher for any given initial mass of equipment. However, its use relies on having a clear idea of the location of water ice in the Solar System, particularly on lunar and asteroidal bodies; such information is not known, other than that bodies within the asteroid belt and further from the Sun are expected to be rich in water ice.
Solar-thermal for ground launch
Solar thermal rockets have been proposed as a system for launching a small personal spacecraft into orbit. The design is based on a high-altitude airship which uses its envelope to focus sunlight onto a tube. The propellant, which would likely be ammonia, is then fed through to produce thrust. Open questions include whether the engine could produce enough thrust to overcome drag, and whether the skin of the airship would survive hypersonic velocities. This has many similarities to the orbital airship proposed by JP Aerospace.
Proposed solar-thermal space systems
As of 2010, two proposals for utilizing solar-thermal propulsion on in-space post-launch spacecraft systems had been made.
A concept to provide low Earth orbit (LEO) propellant depots that could be used as way-stations for other spacecraft to stop and refuel on the way to beyond-LEO missions has proposed that waste gaseous hydrogen—an inevitable byproduct of long-term liquid hydrogen storage in the radiative heat environment of space—would be usable as a monopropellant in a solar-thermal propulsion system. The waste hydrogen would be productively utilized for both orbital stationkeeping and attitude control, as well as providing limited propellant and thrust to use for orbital maneuvers to better rendezvous with other spacecraft that would be inbound to receive fuel from the depot.
Solar-thermal monoprop hydrogen thrusters are also integral to the design of the next-generation cryogenic upper stage rocket proposed by U.S. company United Launch Alliance (ULA). The Advanced Common Evolved Stage (ACES) was intended as a lower-cost, more-capable and more-flexible upper stage that would supplement, and perhaps replace, the existing ULA Centaur and ULA Delta Cryogenic Second Stage (DCSS) upper stage vehicles. The ACES Integrated Vehicle Fluids option eliminates all hydrazine monopropellant and all helium pressurant from the space vehicle—normally used for attitude control and station keeping—and depends instead on solar-thermal monoprop thrusters using waste hydrogen.
Borisov is the second interstellar visitor in two years... I wonder if data from the first visitor (probe/ship) inspired a second visit?

NASA says a new comet is likely an 'interstellar visitor' from another star system — the second ever detected

Astronomers may have spotted the second object ever to visit our solar system from another star system. The object may even fly near Mars in October. Right now, the chances are much higher that the object, known as comet "C/2019 Q4 (Borisov)" (or "g34"), is interstellar, rather than a rock from within the solar system. But scientists are not yet entirely certain. The first such interstellar object ever detected, the mysterious and cigar-shaped 'Oumuamua (which a few scientists controversially argued may be alien in origin), zoomed through our solar system in 2017. An amateur astronomer in Crimea, Gennady Borisov, first spotted C/2019 Q4 in the sky on August 30. It hasn't yet entered our solar system, but astronomers have been collecting data in hopes of plotting the object's path through space and figuring out where it came from. "It's so exciting, we're basically looking away from all of our other projects right now," Olivier Hainaut, an astronomer with the European Southern Observatory, told Business Insider. Hainaut was part of a global team of astronomers that studied 'Oumuamua as it passed through the solar system two years ago. "The main difference from 'Oumuamua and this one is that we got it a long, long time in advance," he added. "Now astronomers are much more prepared." A telescope system at NASA's Jet Propulsion Laboratory, called Scout, automatically flagged C/2019 Q4 as a potential interstellar object. Though the comet's origin has not yet been confirmed, it's traveling at 93,000 miles per hour and is expected to cross our solar system's orbital plane on October 26.
"The high velocity indicates not only that the object likely originated from outside our solar system, but also that it will leave and head back to interstellar space," Davide Farnocchia, who studies near-Earth objects at NASA, said in a press release. The object's core is between 1.2 and 10 miles (2 and 16 kilometers) in diameter. It's expected to pass through our solar system outside Mars' orbit and get no closer to Earth than 190 million miles (300 million kilometers). Early images suggest C/2019 Q4 is followed by a small tail or halo of dust. That's a distinct trait of comets — they hold ice that gets heated up by nearby stars, leading them to shoot out gas and grit into space. The dust could make C/2019 Q4 simpler to track than 'Oumuamua, since dust brightly reflects sunlight. That reflected light could also make it easier for scientists to study the object's composition, since telescope instruments can "taste" light to look for chemical signatures. "Here we have something that was born around another star and traveling toward us," Hainaut said. "It's the next best thing to sending a probe to a different solar system." Astronomers around the globe are grabbing every telescope available to plot C/2019 Q4's path through space. The goal: see whether the object has an orbit that's elliptical (oval-shaped and around the sun) or hyperbolic (checkmark-shaped, and on an open-ended trajectory). It seems much more likely that its path is hyperbolic, though astronomers say more observations are required to know for sure. In particular, they're trying to ascertain C/2019 Q4's eccentricity, or how extreme its orbit is. "The error indicates it's still possible that's within the solar system," Hainaut said. "But that error is decreasing as we get more and more data, and the eccentricity is looking interstellar." The object's seemingly high velocity and comet-like shroud of dust also tilt the scales toward interstellar, Hainaut added. 
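The velocity argument can be sanity-checked against the Sun's escape speed: an object moving faster than escape speed at its distance cannot be on a closed (elliptical) orbit. The 93,000 mph figure is from the article; the heliocentric distances tried below are illustrative assumptions, not measured values:

```python
# Compare the quoted speed with the Sun's escape speed v_esc = sqrt(2*mu/r).
# If v > v_esc at the object's distance, the orbit is unbound (hyperbolic).

from math import sqrt

MU_SUN = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11         # metres
MPH_TO_MS = 0.44704

v_comet = 93_000 * MPH_TO_MS           # ~41.6 km/s, as quoted

for r_au in (2.0, 3.0):                # assumed heliocentric distances
    v_esc = sqrt(2 * MU_SUN / (r_au * AU))
    verdict = "bound (elliptical)" if v_comet < v_esc else "unbound (hyperbolic)"
    print(f"at {r_au} AU: escape speed {v_esc / 1000:.1f} km/s -> {verdict}")
```

At any distance beyond Mars' orbit, solar escape speed is well under 30 km/s, so the quoted speed comfortably exceeds it, which is why the measured velocity alone "tilts the scales toward interstellar."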
This rough simulation shows C/2019 Q4's possible orbital path (green) through the solar system. It may pass between the orbits of Jupiter (purple) and Mars (orange) in late October. "It could be a few days or a few weeks before we have enough data to definitively say. But even with the very best data, we may need more," he said. "It's frustrating." When 'Oumuamua sped past Earth at a distance of just 15 million miles in October 2017, astronomers had no idea it was coming. "We had to scramble for telescope time," Hainaut said. "This time, we're ready." Astronomers will be able to study C/2019 Q4 for at least a year. "The object will peak in brightness in mid-December and continue to be observable with moderate-size telescopes until April 2020," Farnocchia said. "After that, it will only be observable with larger professional telescopes through October 2020." Hainaut and his colleagues have some smaller telescopes queued up for observations, but he said he'd like to use "everything" to observe C/2019 Q4. His team is trying to get time on the "big guys," including the Very Large Telescope in Chile, the Keck Observatory, and the Gemini telescope in Hawaii. He said at least one colleague and likely other astronomers are working on a proposal to have the Hubble Space Telescope take a look. Others are seeking to use NASA's two infrared space telescopes: Spitzer and the Wide-field Infrared Survey Explorer, or WISE. Many astronomers are excited about C/2019 Q4, but more work has to be done to confirm it's truly interstellar. "This is not the first object since 2017/1I, better known as 'Oumuamua, to show a hyperbolic orbit," Michele Bannister, a planetary astronomer at Queen's University Belfast, tweeted on Wednesday. Bannister noted that with such limited observations, an object could appear to have a rare interstellar orbit but later turn out to have an orbit within our solar system. "Sometimes, we just have to wait for the motion of the heavens. 
And make...more observations," she added. Currently, those observations aren't easy, Hainaut said. C/2019 Q4's position in the night sky places it close to the sun, giving astronomers a very limited window of time before dawn to study it. "It's hard to see, but we have the best guys doing astrometry, trying to measure its position in the sky," he said. "It could be a few days or a few weeks before we have enough data to definitively say." If C/2019 Q4 does turn out to be a second interstellar object, that would bode well for a mission Hainaut is proposing to send robotic probes into space to intercept future objects like this. "One of the main issues is: How many of these are there? If we detect one every century, it's hard to plan a mission to intercept one," he said. On the other hand, if these objects come every couple of years, astronomers might even be able to get choosy about which object to intercept. "This suggests we can afford to wait one or two or three years to get the right one, and maybe not the first one we spot after organizing a mission," Hainaut said.
In our never-ending quest for Vulcans, Hynerians and little green men, we’ve blasted off satellites to the far reaches of our galaxy to scope out evidence of extraterrestrials. Now research has suggested that these so-called aliens are most likely microbial — and dead.
Astronomers can see the logic behind all the hypothesizing about alien life. "The universe is probably filled with habitable planets, so many scientists think it should be teeming with aliens," acknowledges Dr. Aditya Chopra, professor at the Australian National University and co-author of "The Case for a Gaian Bottleneck: The Biology of Habitability," which was recently published in Astrobiology. So why, then, are they making a case for most of that life being fossilized?
Exoplanets (planets that orbit other stars besides our sun) may have once been oases for a sort of primordial soup much like the one that spawned life on Earth. Unfortunately for anyone who dreams of being catapulted through space at warp 9 and beamed down to some advanced civilization light-years away, the cruel reality of space is that most habitable conditions don’t last. Surface temperature needs to be stable for billions of years if there is to be any hope of E.T. evolution. Without the precise balance of elements, greenhouse gases, water and albedo (the amount of light a planet’s surface can reflect), no life is able to thrive long enough to evolve, and the environments of most exoplanets are too volatile to maintain the magic ratio.
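The "magic ratio" of light and albedo can be made concrete with the standard planetary equilibrium-temperature formula, T = [S(1 - a)/(4*sigma)]^(1/4), where S is the stellar flux and a the albedo. A minimal sketch with round Earth-like numbers (greenhouse warming deliberately ignored):

```python
# Radiative equilibrium temperature of a planet: absorbed sunlight
# S*(1 - albedo)/4 balances emitted thermal radiation sigma*T^4.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp_k(flux_w_m2: float, albedo: float) -> float:
    return (flux_w_m2 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

# Earth-like inputs: solar constant ~1361 W/m^2, albedo ~0.30.
print(round(equilibrium_temp_k(1361.0, 0.30)))  # ~255 K before greenhouse warming
```

With Earth's flux and albedo this gives about 255 K; the remaining ~33 K of warmth comes from the greenhouse gases the article mentions, which is exactly the balance that runaway feedback can destroy.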
How did our planet do it? Earth was not too different from Venus and Mars during their first billion or so years orbiting the sun. This could mean that both now-uninhabitable zones could have possibly hosted ancient alien microbes. So why did Venus heat up into a swirling storm of poisonous gases and Mars freeze into a cosmic icebox?
Blame feedback loops -- i.e., when the output of a system on a planetary surface feeds back into that same system. Positive feedback ironically doesn’t bode well for potential life forms, because the loop keeps amplifying instability in a planet’s motion, temperature and chemical composition as it feeds back into itself. One result is thermal runaway: a vicious cycle in which an unchecked positive loop drives temperatures to soar or plummet ever further, turning a once-habitable planet into a bacterial graveyard. Runaway heating and cooling means most exoplanets have their default setting on insta-burn or insta-freeze. Erratic conditions like this are the reason that life on many exoplanets, if there was ever life at all, has gone the way of the dinosaurs.
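The two loop types can be told apart with a toy iteration: a perturbation multiplied each pass through the loop by a gain g (a hypothetical number, not a measured one) either decays back to equilibrium or runs away:

```python
# Toy contrast between stabilizing and runaway feedback. With |g| < 1
# (net negative feedback) a temperature perturbation shrinks each pass;
# with g > 1 (net positive feedback) it grows without bound.

def iterate_perturbation(g: float, steps: int = 20, t0: float = 1.0) -> float:
    """Return the size of a perturbation after `steps` passes through the loop."""
    t = t0
    for _ in range(steps):
        t *= g
    return t

print(f"stabilizing loop (g=0.8): {iterate_perturbation(0.8):.4f}")  # decays toward 0
print(f"runaway loop     (g=1.3): {iterate_perturbation(1.3):.1f}")  # blows up
```

Real climate systems stack many loops with different gains, but the cartoon captures why a planet whose net gain exceeds one ends up insta-burned or insta-frozen.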
What makes Earth unique is that it was able to hold on to habitable conditions just long enough for emergent life to take hold. "Early life is fragile, so we believe it rarely evolves quickly enough to survive," says Chopra. Negative feedback loops, which keep reducing inconsistencies as they feed back into themselves, stabilized conditions on the nascent planet. Microbes were able to keep breeding until the surface was covered in bacterial mats much like those that spring up around heat vents on the ocean floor (the other final frontier). The metabolic reactions of these microbes were able to regulate greenhouse gas production, which in turn stabilized the atmosphere and kept their writhing masses alive. At some point those writhing masses became us.
Does this mean that we will only ever encounter alien life in galactic graveyards? Not necessarily. Satellites continue to probe an uncharted universe scattered with billions and billions of Carl Sagan’s pale blue dots. Planets that can maintain life are rare, but if one exists, there will always be believers. Any one of those pale blue dots could be pulsing with life.
(Source: Science Daily) | 0.893823 | 3.577894 |
There’s no need to worry about Planet X destroying Earth in October … we may not make it past February. NASA has spotted not one but two big space objects heading towards Earth, with one coming close enough to see with binoculars. Is that close enough to be afraid?
(This comet) has a good chance of becoming visible through a good pair of binoculars, although we can’t be sure because a comet’s brightness is notoriously unpredictable.
That not-too-comforting comment comes from Paul Chodas, manager of NASA’s Center for Near-Earth Object (NEO) Studies at the Jet Propulsion Laboratory in Pasadena, California. The NEO he’s describing is comet C/2016 U1, which was discovered by the NEOWISE asteroid-hunting project in October 2016 and is in the southeastern sky over the northern hemisphere just before dawn this week – nothing like a little advance warning, NASA! While it looks big and close, C/2016 U1 doesn’t get any closer than 66 million miles (106 million km) from Earth before it swings around the Sun inside the orbit of Mercury and heads back out into space for another few thousand years. Maybe by the time it gets back, NASA will have the technology to figure out how big it actually is.
Assuming we survive C/2016 U1, we have about six weeks to get ready for the more ominous 2016 WF9. Discovered by NEOWISE on November 27, 2016, this space ball is big and spooky. It measures 0.3 to 0.6 miles (0.5 to 1 km) in diameter and is made up of a dark matter (not THAT dark matter) that reflects very little light. This dark surface has NASA researchers arguing over whether 2016 WF9 is a comet or an asteroid. While its orbit says “comet,” its lack of a dust and gas cloud says “asteroid.”
Meanwhile, Earth says “Who cares what it is … when is it coming and how close is it going to get?” Estimated date of arrival for 2016 WF9 is February 25, 2017, when it will enter inside Earth’s orbit at a distance of nearly 32 million miles (51 million km) away. While that’s much closer than C/2016 U1, its dark surface means it can’t be seen with binoculars. That distance also means it’s not putting the planet in any danger … for now.
However, it’s close enough that astronomers may be able to solve the mystery of whether it’s a comet or an asteroid … or something else, according to Deputy Principal Investigator James Bauer at JPL.
2016 WF9 could have cometary origins. This object illustrates that the boundary between asteroids and comets is a blurry one; perhaps over time this object has lost the majority of the volatiles that linger on or just under its surface.
Two more space balls – one bright, one big, dark and mysterious – pass close by Earth. Is a few weeks notice enough of a warning? | 0.819985 | 3.317852 |
Astronomers are hailing the photograph of a dust-cloaked star as one of the most dramatic pictures ever taken. The image from the Hubble Space Telescope shows a dying star 4,000 light-years away, swathed in a blanket of icy “hailstones”.
However, scientists are baffled by the vast butterfly-like “wings” around the star.
These are dust clouds produced as the star dies. But scientists cannot work out how the so-called Butterfly Nebula formed its distinctive shape, as the star at its heart is actually round. Lars Christensen of the European Space Agency, which operates the Hubble Space Telescope jointly with NASA, said: "It's a big mystery to us all - how a round star like our own sun can create this effect, which is so symmetrical. It's amazing."
The image reveals huge walls of compressed gas and bubbling outflows. A massive ring of dark dust hiding the star also has experts baffled, reports thisislondon.co.uk.
According to innovations-report.com, the Bug Nebula, NGC 6302, is one of the brightest and most extreme planetary nebulae known. At its centre lies a superhot dying star smothered in a blanket of ‘hailstones’. A new Hubble image reveals fresh detail in the wings of this ‘cosmic butterfly’.
This image of the Bug Nebula, taken with the NASA/ESA Hubble Space Telescope (HST), shows impressive walls of compressed gas. A torus (‘doughnut’) shaped mass of dust surrounds the inner nebula (seen at the upper right).
At the heart of the turmoil is one of the hottest stars known. Despite its extremely high temperature of at least 250 000 degrees Celsius, the star itself has never been seen directly: it shines most brightly in the ultraviolet and is hidden by the blanket of dust, making it hard to observe.
Chemically, the composition of the Bug Nebula also makes it one of the more interesting objects known. Earlier observations with the European Space Agency’s Infrared Space Observatory (ISO) have shown that the dusty torus contains hydrocarbons, carbonates such as calcite, as well as water ice and iron. The presence of carbonates is interesting. In the Solar System, their presence is taken as evidence for liquid water in the past, because carbonates form when carbon dioxide dissolves in liquid water and forms sediments. But its detection in nebulae such as the Bug Nebula, where no liquid water has existed, shows that other formation processes cannot be excluded.
But what excites astronomers most is not the shimmer of the wings but a dark band that bisects them. A dense ring of gas and dust - called a torus - girdles and obscures the dying star and contains most of the star's ejected gas.
"We really don't know what causes the material to be ejected primarily in one plane," says Albert Zijlstra, an astronomer at the University of Manchester Institute of Science and Technology, UK. His analysis of the Hubble image and others will appear in Astronomy & Astrophysics.
The Bug Nebula is one of about 1600 known planetary nebulae. These form when stars up to eight times the mass of the Sun begin to die. They bloat into red giants before shedding as much as half their mass as gas and dust nebulae, which often take on the pinched appearance of the Bug Nebula, reports newscientist.com.