26. Skyglow

Skyglow is the diffuse illumination of the night sky by light sources other than large astronomical objects. Sometimes this is considered to include diffuse natural sources such as the zodiacal light (discussed in a previous proof), or the faint glow of the atmosphere itself caused by incoming cosmic radiation (called airglow), but primarily skyglow is considered to be the product of artificial lighting caused by human activity. In this context, skyglow is essentially the form of light pollution which causes the night sky to appear brighter near large sources of artificial light (i.e. cities and towns), drowning out natural night sky sources such as fainter stars.

Skyglow from Keys View

Skyglow from the cities of the Coachella Valley in California, as seen from Keys View lookout, Joshua Tree National Park, approximately 20 km away. (Public domain image by U.S. National Park Service/Lian Law, from Flickr.)

The sky above a city appears to glow due to the scattering of light off gas molecules and aerosols (i.e. dust particles, and suspended liquid droplets in the air). Scattering of light from air molecules (primarily nitrogen and oxygen) is called Rayleigh scattering. This is the same mechanism that causes the daytime sky to appear blue, due to scattering of sunlight. Although blue light is scattered more strongly, the overall colour effect is different for relatively nearby light sources than it is for sunlight. Much of the blue light is also scattered away from our line of sight, so skyglow caused by Rayleigh scattering ends up a similar colour to the light sources. Scattering off aerosol particles is called Mie scattering, and is much less dependent on wavelength, so also has little effect on the colour of the scattered light.
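The inverse-fourth-power wavelength dependence of Rayleigh scattering is easy to sketch numerically. In this rough illustration, 450 nm and 650 nm are arbitrary choices standing in for "blue" and "red" light:

```python
# Rayleigh scattering strength scales as 1/wavelength^4.
# Compare scattering of blue light (450 nm) to red light (650 nm);
# the specific wavelengths are illustrative choices.
def rayleigh_ratio(lambda_a_nm, lambda_b_nm):
    """Relative Rayleigh scattering strength of wavelength a versus wavelength b."""
    return (lambda_b_nm / lambda_a_nm) ** 4

print(f"Blue (450 nm) scatters {rayleigh_ratio(450, 650):.1f}x "
      f"more strongly than red (650 nm)")
```

So blue light is scattered roughly four times as strongly as red, which is why the daytime sky is blue, even though, as noted above, skyglow from nearby artificial lights ends up a similar colour to the sources themselves.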

Skyglow from Cholla

Skyglow from the cities of the Coachella Valley in California, as seen from Cholla Cactus Garden, Joshua Tree National Park, approximately 40 km away. (Public domain image by U.S. National Park Service/Hannah Schwalbe, from Flickr.)

Despite this relative independence of the scattering on wavelength, bluer light sources result in a brighter skyglow as perceived by humans. This is due to a psychophysical property of our visual systems known as the Purkinje effect. At low light levels, the rod cells in our retinas provide most of the sensory information, rather than the colour-sensitive cone cells. Rod cells are more sensitive to blue-green light than they are to redder light. This means that at low light levels, we are relatively more sensitive to blue light (compared to red light) than we are at high light levels. Hence skyglow caused by blue lights appears brighter than skyglow caused by red lights of the same apparent brightness.

Artificially produced skyglow appears prominently in the sky above cities. It makes the whole night sky as seen from within the city brighter, making it difficult or impossible to see fainter stars. At its worst, skyglow within a city can drown out virtually all night time astronomical objects other than the moon, Venus, and Jupiter. The skyglow from a city can also be seen from dark places up to hundreds of kilometres away, as a dome of bright sky above the location of the city on the horizon.

Skyglow from Ashurst Lake

Skyglow from the cities of Phoenix and Flagstaff, as seen from Ashurst Lake, Arizona, rendered in false colour. Although the skyglow from each city is visible, the cities themselves are below the horizon and not visible directly. The arc of light reaching up into the sky is the Milky Way. (Public domain image by the U.S. National Park Service, from Wikipedia.)

However, although the skyglow from a city can be seen from such a distance, the much brighter lights of the city itself cannot be seen directly, because they are below the horizon. That you can observe the fainter glow of the sky above a city while being unable to see the city's lights directly is a consequence of the curvature of the Earth.

This is not the only effect of Earth’s curvature on the appearance of skyglow; it also affects the brightness of the glow. In the absence of any scattering or absorption, the intensity of light falls off with distance from the source following an inverse square law. Physically, this is because the surface area of spherical shells of increasing radius from a light source increases as the square of the radius. So the same light flux has to “spread out” to cover an area proportional to the square of the distance, and thus by the conservation of energy its brightness at any point is proportional to one divided by the square of the distance. (The same argument applies to many phenomena whose strengths vary with distance, and is why inverse square laws are so common in physics.)
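The inverse square law can be sketched in a few lines; this minimal example assumes an idealised isotropic point source:

```python
# Inverse square law: without scattering or absorption, light from a point
# source spreads over a sphere of area 4*pi*r^2, so intensity falls as 1/r^2.
import math

def intensity(power_watts, distance_m):
    """Intensity (W/m^2) of an isotropic point source at a given distance."""
    return power_watts / (4 * math.pi * distance_m ** 2)

# Doubling the distance quarters the intensity:
i_near = intensity(100.0, 10.0)
i_far = intensity(100.0, 20.0)
print(i_near / i_far)  # 4.0
```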

Skyglow, however, is also affected by scattering and absorption in the atmosphere. The result is that the brightness falls off more rapidly with distance from the light source. In 1977, Merle F. Walker of Lick Observatory in California published a study of the sky brightness caused by skyglow at varying distances from several southern Californian cities[1]. He found an empirical relationship that the intensity of skyglow varies as the inverse of distance to the power of 2.5.
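Walker's empirical relationship can be sketched numerically and compared with the inverse square law; the 10 km reference distance below is an arbitrary choice for illustration:

```python
# Walker's empirical law: skyglow intensity falls off as distance^-2.5,
# steeper than the inverse square law, because of atmospheric scattering
# and absorption. Intensities here are relative (arbitrary units).
def relative_skyglow(distance_km, reference_km=10.0):
    """Skyglow intensity relative to that at reference_km, per Walker's law."""
    return (distance_km / reference_km) ** -2.5

# Moving from 10 km to 40 km from a city:
print(f"Walker's law:   {relative_skyglow(40):.4f}")   # 1/32 of reference
print(f"Inverse square: {(40 / 10) ** -2:.4f}")        # 1/16 of reference
```

Quadrupling the distance dims the skyglow by a factor of 32 under Walker's law, rather than the factor of 16 an inverse square law would give.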

Skyglow intensity versus distance from Salinas

Plot of skyglow intensity versus distance from Salinas, California. V is the “visual” light band and B the blue band of the UBV photometric system, which are bands of about 90 nanometres width centred around wavelengths of 540 and 442 nm respectively. The fitted line corresponds to intensity ∝ (distance)^−2.5. (Figure reproduced from [1].)

This relationship, known as Walker’s law, has been confirmed by later studies, with one notable addition. It only holds for distances up to 50-100 kilometres from the city. When you travel further away from a city, the intensity of the skyglow starts to fall off more rapidly than Walker’s law suggests, a little bit faster at first, but then more and more rapidly. This is because as well as the absorption effect, the scattered light path is getting longer and more complex due to the curvature of the Earth.

A later study by prominent astronomical light pollution researcher Roy Henry Garstang published in 1989 examined data from multiple cities in Colorado, California, and Ontario to produce a more detailed model of the intensity of skyglow[2]. The model was then tested and verified for multiple astronomical sites in the mainland USA, Hawaii, Canada, Australia, France, and Chile. Importantly for our perspective, the model Garstang came up with requires the Earth’s surface to be curved.

Skyglow intensity model geometry

Geometrical diagrams for calculating intensity of skyglow caused by cities, from Garstang. The observer is located at O, atop a mountain A. Light from a city C travels upward along the path s until it is scattered into the observer’s field of view at point Q. The centre of the spherical Earth is at S, off the bottom of the figure. (Figure reproduced from [2].)

Interestingly, Garstang also calculated a model for the intensity of skyglow if you assume the Earth is flat. He did this because it greatly simplifies the geometry and the resulting algebra, to see if it produced results that were good enough. However, quoting directly from the paper:

In general, flat-Earth models are satisfactory for small city distances and observations at small zenith distances. As a rough rule of thumb we can say that for calculations of night-sky brightnesses not too far from the zenith the curvature of the Earth is unimportant for distances of the observer of up to 50 km from a city, at which distance the effect of curvature is typically 2%. For larger distances the curved-Earth model should always be used, and the curved-Earth model should be used at smaller distances when calculating for large zenith distances. In general we would use the curved-Earth model for all cases except for city-center calculations. […] As would be expected, we find that the inclusion of the curvature of the Earth causes the brightness of large, distant cities to fall off more rapidly with distance than for a flat-Earth model.

In other words, to get acceptably accurate results for either distances over 50 km or for large zenith angles at any distance, you need to use the spherical Earth model – because assuming the Earth is flat gives you a significantly wrong answer.

This result was again confirmed experimentally in a 2007 paper[3], as shown in the following diagram:

Skyglow intensity versus distance from Las Vegas

Plot of skyglow intensity versus distance from Las Vegas as observed at various dark sky locations in Nevada, Arizona, and California. The dashed line is Walker’s Law, with an inverse power relationship of 2.5. Skyglow at Rogers Peak, more than 100 km away, is less than predicted by Walker’s Law, “due to the Earth’s curvature complicating the light path” (quoted from the paper). (Figure reproduced from [3].)

So astronomers, who are justifiably concerned with knowing exactly how much light pollution from our cities they need to contend with at their observing sites, calculate the intensity of skyglow using a model that is significantly more accurate if you include the curvature of the Earth. Using a flat Earth model, which might otherwise be preferred for simplicity, simply isn’t good enough – because it doesn’t model reality as well as a spherical Earth.

References:

[1] Walker, M. F. “The effects of urban lighting on the brightness of the night sky”. Publications of the Astronomical Society of the Pacific, 89, p. 405-409, 1977. https://doi.org/10.1086/130142

[2] Garstang, R. H. “Night sky brightness at observatories and sites”. Publications of the Astronomical Society of the Pacific, 101, p. 306-329, 1989. https://doi.org/10.1086/132436

[3] Duriscoe, D. M., Luginbuhl, C. B., Moore, C. A. “Measuring Night-Sky Brightness with a Wide-Field CCD Camera”. Publications of the Astronomical Society of the Pacific, 119, p. 192-213, 2007. https://doi.org/10.1086/512069

25. Planetary formation

Why does Earth exist at all?

The best scientific model we have for understanding how the Earth exists begins with the Big Bang, the event that created space and time as we know and understand it, around 14 billion years ago. Scientists are interested in the questions of what possibly happened before the Big Bang and what caused the Big Bang to happen, but haven’t yet converged on any single best model for those. However, the Big Bang itself is well established by multiple independent lines of evidence and fairly uncontroversial.

The very early universe was a hot, dense place. Less than a second after the Big Bang, it was essentially a soup of primordial matter and energy. The energy density was so high that the equivalence of mass and energy (discovered by Albert Einstein) allowed energy to convert into particle/antiparticle pairs and vice versa. The earliest particles we know of were quarks, electrons, positrons, and neutrinos. The high energy density also pushed space apart, causing it to expand rapidly. As space expanded, the energy density reduced. The particles and antiparticles annihilated, converting back to energy, and this process left behind a relatively small residue of particles.

Diagram of the Big Bang

Schematic diagram of the evolution of the universe following the Big Bang. (Public domain image by NASA.)

After about one millionth of a second, the quarks no longer had enough energy to stay separated, and bound together to form the protons and neutrons more familiar to us. The universe was now a plasma of charged particles, interacting strongly with the energy in the form of photons.

After a few minutes, the strong nuclear force could compete with the ambient energy level, and free neutrons bonded together with protons to form a few different types of atomic nuclei, in a process known as nucleosynthesis. A single proton and neutron could pair up to form a deuterium nucleus (an isotope of hydrogen, also known as hydrogen-2). More rarely, two protons and a neutron could combine to make a helium-3 nucleus. More rarely still, three protons and four neutrons occasionally joined to form a lithium-7 nucleus. Importantly, if two deuterium nuclei collided, they could stick together to form a helium-4 nucleus, the most common isotope of helium. The helium-4 nucleus (or alpha particle as it is also known in nuclear physics) is very stable, so the longer this process went on, the more helium nuclei were formed and the more depleted the supply of deuterium became. Ever since the Big Bang, natural processes have destroyed more of the deuterium, but created only insignificant additional amounts – which means that virtually all of the deuterium now in existence was created during the immediate aftermath of the Big Bang. This is important because measuring the abundance of deuterium in our universe now gives us valuable evidence on how long this phase of Big Bang nucleosynthesis lasted. Furthermore, measuring the relative abundances of helium-3 and lithium-7 gives us further constraints on the physics of the Big Bang. This is one method we have of knowing what the physical conditions during the very early universe must have been like.

Nuclei formed during the Big Bang

Diagrams of the nuclei (and subsequent atoms) formed during Big Bang nucleosynthesis.

The numbers all point to this nucleosynthesis phase lasting around 20 minutes, by which time all the free neutrons had been bound into nuclei, but the vast majority of protons were left bare. Then, approximately 380,000 years after the Big Bang, something very important happened. The energy level had lowered enough for the electrostatic attraction of protons and electrons to form the first atoms. Prior to this, any atoms formed would quickly be ionised again by the surrounding energy. The bare protons attracted an electron each and became atoms of hydrogen. The deuterium nuclei also captured an electron each to become atoms of deuterium. The helium-3 and helium-4 nuclei captured two electrons each, while the lithium nuclei attracted three. There were two other types of atoms almost certainly formed which I haven’t mentioned yet: hydrogen-3 (or tritium) and beryllium-7 – however both of these are radioactive and have short half-lives (12 years for tritium; 53 days for beryllium-7), so within a few hundred years there would be virtually none of either left. And that was it – the universe had its initial supply of atoms. There were no other elements yet.

When the electrically charged electrons became attached to the charged nuclei, the electric charges cancelled out, and the universe changed from a charged plasma to an electrically neutral gas. This made a huge difference, because photons interact strongly with electrically charged particles, but much less so with neutral ones. Suddenly, the universe went from opaque to largely transparent, and light could propagate through space. When we look deep into space with our telescopes, we look back in time because of the finite speed of light (light arriving at Earth from a billion light years away left its source a billion years ago). This is the earliest possible time we can see. The temperature of the universe at this time was close to 3000 kelvins, and the radiation had a profile equal to that of a red-hot object at that temperature. Over the billions of years since, as space expanded, the radiation became stretched to longer wavelengths, until today it resembles the radiation seen from an object at temperature around 2.7 K. This is the cosmic microwave background radiation that we can observe in every direction in space – it is literally the glow of the Big Bang, and one of the strongest observational pieces of evidence that the Big Bang happened as described above.

Cosmic microwave background

Map of the cosmic microwave background radiation over the full sky, as observed by NASA’s WMAP satellite. The temperature of the radiation is around 2.7 K, while the fluctuations shown are ±0.0002 K. The radiation is thus extremely smooth, but does contain measurable fluctuations, which lead to the formation of structure in the universe. (Public domain image by NASA.)

The early universe was not uniform. The density of matter was a little higher in places, a little lower in other places. Gravity could now get to work. Where the matter was denser, gravity was higher, and these areas began attracting matter from the less dense regions. Over time, this formed larger and larger structures, the size of stars and planetary systems, galaxies, and clusters of galaxies. This part of the process is one where a lot of the details still need to be worked out – we know more about the earlier stages of the universe. At any rate, at some point clumps of gas roughly the size of planetary systems coalesced and the gas at the centre accreted under gravity until it became so massive that the pressure at the core initiated nuclear fusion. The clumps of gas became the first stars.

The Hubble Extreme Deep Field

The Hubble Extreme Deep Field. In this image, except for the three stars with visible 8-pointed starburst patterns, every dot of light is a galaxy. Some of the galaxies in this image are 13.2 billion years old, dating from just 500 million years after the Big Bang. (Public domain image by NASA.)

The first stars had no planets. There was nothing to make planets out of; the only elements in existence were hydrogen with a tiny bit of helium and lithium. But the nuclear fusion process that powered the stars created more elements: carbon, oxygen, nitrogen, silicon, sodium, all the way up to iron. After a few million years, the biggest stars had burnt through as much nuclear fuel in their cores as they could. Unable to sustain the nuclear reactions keeping them stable, they collapsed and exploded as supernovae, spraying the elements they produced back into the cosmos. The explosions also generated heavier elements: copper, gold, lead, uranium. All these things were created by the first stars.

Supernova 2012Z

Supernova 2012Z, in the spiral galaxy NGC 1309, position shown by the crosshairs, and detail before and during the explosion. (Creative Commons Attribution 4.0 International image by ESA/Hubble, from Wikimedia Commons.)

The interstellar gas cloud was now enriched with heavy elements, but still by far mostly hydrogen. The stellar collapse process continued, but now as a star formed, there were heavy elements whirling in orbit around it. The conservation of angular momentum meant that elements spiralled slowly into the proto-star at the centre of the cloud, forming an accretion disc. Now slightly denser regions of the disc itself began attracting more matter due to their stronger gravity. Matter began piling up, and the heavier elements like carbon, silicon, and iron formed the first solid objects. Over a few million years, as the proto-star in the centre slowly absorbed more gas, the lumps of matter in orbit—now large enough to be called dust, or rocks—collided together and grew, becoming metres across, then kilometres, then hundreds of kilometres. At this size, gravity ensured the growing balls of rock were roughly spherical, due to hydrostatic equilibrium (previously discussed in a separate article). They attracted not only solid elements, but also gases like oxygen and hydrogen, which wrapped the growing protoplanets in atmospheres.

Protoplanetary disc of HL Tauri

Protoplanetary disc of the very young star HL Tauri, imaged by the Atacama Large Millimetre Array. The gaps in the disc are likely regions where protoplanets are accreting matter. (Creative Commons Attribution 4.0 International image by ALMA (ESO/NAOJ/NRAO), from Wikimedia Commons.)

Eventually the star at the centre of this protoplanetary system ignited. The sudden burst of radiation pressure from the star blew away much of the remaining gas from the local neighbourhood, leaving behind only that which had been gravitationally bound to what were now planets. The closest planets had most of the gas blown away, but beyond a certain distance it was cold enough for much of the gas to remain. This is why the four innermost planets of our own solar system are small rocky worlds with thin or no atmospheres with virtually no hydrogen, while the four outermost planets are larger and have vast, dense atmospheres mainly of hydrogen and hydrogen compounds.

But the violence was not over yet. There were still a lot of chunks of orbiting rock and dust besides the planets. These continued to collide and reorganise, some becoming moons of the planets, others becoming independent asteroids circling the young sun. Collisions created craters on bigger worlds, and shattered some smaller ones to pieces.

Mimas

Saturn’s moon Mimas, imaged by NASA’s Cassini probe, showing a huge impact crater from a collision that would nearly have destroyed the moon. (Public domain image by NASA.)

Miranda

Uranus’s moon Miranda, imaged by NASA’s Voyager 2 probe, showing disjointed terrain that may indicate a major collision event that shattered the moon, but was not energetic enough to scatter the pieces, allowing them to reform. (Public domain image by NASA.)

The leftover pieces of the creation of the solar system still collide with Earth to this day, producing meteors that can be seen in the night sky, and sometimes during daylight. (See also the previous article on meteor arrival rates.)

The process of planetary formation, all the way from the Big Bang, is relatively well understood, and our current theories are successful in explaining the features of our solar system and those we have observed around other stars. There are details to this story where we are still working out exactly how or when things happened, but the overall sequence is well established and fits with our observations of what solar systems are like. (There are several known extrasolar planetary systems with large gas giant planets close to their suns. This is a product of observational bias—our detection methods are most sensitive to massive planets close to their stars—and such planets can drift closer to their stars over time after formation.)

One major consequence of this sequence of events is that planets form as spherical objects (or almost-spherical ellipsoids). There is no known mechanism for the formation of a flat planet, and even if one did somehow form it would be unstable and collapse into a sphere.

24. Gravitational acceleration variation

When you drop an object, it falls down. Initially the speed at which it falls is zero, and this speed increases over time as the object falls faster and faster. In other words, objects falling under the influence of gravity are accelerating. It turns out that the rate of acceleration is a constant when the effects of air resistance are negligible. Eventually air resistance provides a balancing force and the speed of fall reaches a limit, known as the terminal velocity.

Ignoring the air resistance part, the constant acceleration caused by gravity on the Earth’s surface is largely the same everywhere on Earth. This is why you feel like you weigh the same amount no matter where you travel (excluding travel into space!). However, there are small but measurable differences in the Earth’s gravity at different locations.

It’s straightforward to measure the strength of the acceleration due to gravity at any point on Earth with a gravity meter. We’ve already met one type of gravity meter during Airy’s coal pit experiment: a pendulum. So the measurements can be made with Georgian era technology. Nowadays, the most accurate measurements of Earth’s gravity are made from space using satellites. NASA’s GRACE mission, a pair of satellites launched in 2002, gave us our best look yet at the details of Earth’s gravitational field.

Since the Earth is roughly a sphere of roughly uniform density, you’d expect the gravity at the Earth’s surface to be roughly the same everywhere and—roughly speaking—it is. But going one level of detail deeper, we know the Earth is closer to ellipsoidal than spherical, with a bulge around the equator and flattening at the poles. The surface gravity of an ellipsoid requires some nifty triple integrals to calculate, and fortunately someone on Stack Exchange has done the work for us[1].

Given the radii of the Earth, and an average density of 5520 kg/m3, the responder calculates that the acceleration due to gravity at the poles should be 9.8354 m/s2, while the acceleration at the equator should be 9.8289 m/s2. The difference is about 0.07%.

So at this point let’s look at what the Earth’s gravitational field does look like. The following figure shows the strength of gravity at the surface according to the Earth Gravitational Model 2008 (EGM2008), using data from the GRACE satellite.

Earth Gravitational Model 2008

Earth’s surface gravity as measured by NASA’s GRACE and published in the Earth Gravitational Model 2008. (Figure produced by Curtin University’s Western Australian Geodesy Group, using data from [2].)

We can see that the overall characteristic of the surface gravity is that it is minimal at the equator, around 9.78 m/s2, and maximal at the poles, around 9.83 m/s2, with a transition in between. Overlaid on this there are smaller details caused by the continental landmasses. We can see that mountainous areas such as the Andes and Himalayas have lower gravity – because they are further away from the centre of the planet. Now, the numerical value at the poles is a pretty good match for the theoretical value on an ellipsoid, close to 9.835 m/s2. But the equatorial figure isn’t nearly as good a match; the difference between the equator and poles is around 0.6%, not the 0.07% calculated for an ellipsoid of the Earth’s shape.

The extra 0.5% difference comes about because of another effect that I haven’t mentioned yet: the Earth is rotating. The rotational speed at the equator generates a centrifugal pseudo-force that slightly counteracts gravity. This is easy to calculate; it equals the radius times the square of the angular velocity of the surface at the equator, which comes to 0.034 m/s2. Subtracting this from our theoretical equatorial value gives 9.794 m/s2. This is not quite as low as 9.78 seen in the figure, but it’s much closer. I presume that the differences are caused by the assumed average density of Earth used in the original calculation being a tiny bit too high. If we reduce the average density to 5516 kg/m3 (which is still the same as 5520 to three significant figures, so is plausible), our gravities at the poles and equator become 9.828 and 9.788, which together make a better match to the large scale trends in the figure (ignoring the small fluctuations due to landmasses).
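The centrifugal correction is a one-liner to check. The radius and sidereal day values below are standard figures, and the 9.8289 m/s2 equatorial value is the ellipsoid result from the Stack Exchange calculation cited above:

```python
# Centrifugal pseudo-acceleration at the equator: a = omega^2 * R, where
# omega is Earth's rotation rate (one sidereal day) and R the equatorial radius.
import math

R_EQUATOR = 6.378e6      # equatorial radius, metres
SIDEREAL_DAY = 86164.1   # seconds (one full rotation relative to the stars)

omega = 2 * math.pi / SIDEREAL_DAY
a_centrifugal = omega ** 2 * R_EQUATOR
print(f"centrifugal: {a_centrifugal:.4f} m/s^2")  # ~0.034

# Subtract from the theoretical ellipsoid value at the equator:
g_equator = 9.8289 - a_centrifugal
print(f"effective equatorial gravity: {g_equator:.3f} m/s^2")  # ~9.795
```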

Of course the structure and shape of the Earth are not quite as simple as that of a uniformly dense perfect ellipsoid, so there are some residual differences. But still, this is a remarkably consistent outcome. One final point to note: it took me some time to track down the figure above showing the full value of the Earth’s gravitational field at the surface. When you search for this, most of the maps you find look like the following:

Earth Gravitational Model 2008 residuals

Earth surface gravity residuals, from NASA’s GRACE satellite data. The units are milligals; 1 milligal is equal to 0.00001 m/s2. (Public domain image by NASA, from [3].)

These seem to show that gravity is extremely lumpy across the Earth’s surface, but this is just showing the smaller residual differences after subtracting off a smooth gravity model that includes the relatively large polar/equatorial difference. Given the units of milligals, the red and blue areas shown in this map differ by only a little over 0.001 m/s2 once the smooth model is subtracted.

We’re not done yet, because besides Earth we also have detailed gravity mapping for another planet: Mars!

Mars Gravitational Model 2011

Surface gravity strength on Mars. The overall trend is for lowest gravity at the equator, increasing with latitude to highest values at the poles, just like Earth. (Figure reproduced from [4].)

This map shows that the surface gravity on Mars has the same overall shape as that of Earth: highest at the poles and lowest at the equator, as we’d expect for a rotating ellipsoidal planet. Also notice that Mars’s gravity is only around 3.7 m/s2, less than half that of Earth.

Mars’s geography is in some sense much more dramatic than that of Earth, and we can see the smaller scale anomalies caused by the Hellas Basin (large red circle at lower right, which is the lowest point on Mars, hence the higher gravity), Olympus Mons (leftmost blue dot, northern hemisphere, Mars’s highest mountain), and the chain of three volcanoes on the Tharsis Plateau (straddling the equator at left). But overall, the polar/equatorial structure matches that of Earth.

Of course this all makes sense because the Earth is approximately an ellipsoid, differing from a sphere by a small amount of equatorial bulge caused by rotation, as is the case with Mars and other planets. We can easily see that Mars and the other planets are almost spherical globes, by looking at them with a telescope. If the structure of Earth’s gravity is similar to those, it makes sense that the Earth is a globe too. If the Earth were flat, on the other hand, this would be a remarkable coincidence, with no readily apparent explanation for why gravity should be stronger at the poles (remembering that the “south pole” in most flat Earth models is the rim of a disc) and weaker at the equator (half way to the rim of the disc), other than simply saying “that’s just the way Earth’s gravity is.”

References:

[1] “Distribution of Gravitational Force on a non-rotating oblate spheroid”. Stack Exchange: Physics, https://physics.stackexchange.com/questions/144914/distribution-of-gravitational-force-on-a-non-rotating-oblate-spheroid (Accessed 2019-09-06.)

[2] Pavlis, N. K., Holmes, S. A., Kenyon, S. C., Factor, J. K. “The development and evaluation of the Earth Gravitational Model 2008 (EGM2008)”. Journal of Geophysical Research, 117, p. B04406, 2012. https://doi.org/10.1029/2011JB008916

[3] Space Images, Jet Propulsion Laboratory. https://www.jpl.nasa.gov/spaceimages/index.php?search=GRACE&category=Earth (Accessed 2019-09-06.)

[4] Hirt, C., Claessens, S. J., Kuhn, M., Featherstone, W. E. “Kilometer-resolution gravity field of Mars: MGM2011”. Planetary and Space Science, 67(1), p. 147-154, 2012. https://doi.org/10.1016/j.pss.2012.02.006

Pendulum experiment

With my Science Club class of 7-10 year olds, I did an experiment to test what factors influence the period of swing of a pendulum, and to measure the strength of Earth’s gravity. I borrowed some brass weights and a retort stand from my old university Physics Department and took them to the school. Then with the children we did the experiment!

We set up pendulums with different lengths of string, measuring the length of each one. With each pendulum length, we tested different numbers of brass weights, and pulling the weight back by a different distance so that the pendulum swung through shorter or longer arcs. For each combination of string length, mass, and swing length, I got the kids to time a total of 10 back and forth swings with a stopwatch. I recorded the times and divided by 10 to get an average swing time for each pendulum.

Here’s a graph showing the pendulum period (or “swing time” as I’m calling it with the kids), plotted against the mass of the weight at the end.

Pendulum period versus mass

Pendulum period versus mass. The different colours indicate different pendulum lengths.

Here’s a graph showing the pendulum period plotted against the swing distance (i.e. the amplitude).

Pendulum period versus swing distance

Pendulum period versus swing distance. The different colours indicate different pendulum lengths.

These first two graphs show pretty clearly that the period of the pendulum is not dictated by either the mass of the pendulum or the amplitude of the swing. If you look at the different colours showing the pendulum length, however, you may discern a pattern.

And here’s a graph showing the pendulum period plotted against the length of the pendulum.

Pendulum period versus length

Pendulum period versus length. The line is a power law fit to all the points.

In this case, all the points from different pendulum masses and swing amplitudes but the same length are clustered together (with some scatter caused by experimental errors in using the stopwatch). This indicates that only the length is important in determining the period. This matches the first order approximation theoretical formula for the period of a pendulum, T:

T = 2π√(l/g),

where l is the length and g is the acceleration due to gravity. To calculate g from the experimental data, I squared the period numbers and calculated the slope of the best fit line through zero of the (l, T²) points. Then g = 4π² divided by the slope… which gives g = 10.0 m/s².

The true value is 9.81 m/s², so we got it right to a little better than 2%. Which I consider pretty good given the fact that I had kids as young as 7 making the measurements!
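The slope calculation can be sketched in a few lines of Python. The numbers below are made up for illustration (not our actual classroom measurements), but the method is the same: fit T² against l with a line forced through zero, then divide 4π² by the slope.

```python
import math

# Hypothetical (length, period) pairs in metres and seconds --
# illustrative stand-ins for the classroom data, not the real values.
data = [(0.20, 0.91), (0.40, 1.27), (0.60, 1.56), (0.80, 1.80)]

# Least-squares slope of T^2 versus l for a line forced through zero:
# slope = sum(l * T^2) / sum(l^2)
slope = sum(l * t**2 for l, t in data) / sum(l * l for l, _ in data)

# From T = 2*pi*sqrt(l/g):  T^2 = (4*pi^2 / g) * l,  so  g = 4*pi^2 / slope
g = 4 * math.pi**2 / slope
print(f"g = {g:.2f} m/s^2")
```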

Although this is an “other science” entry on this blog and not a proof of the Earth’s roundness, I’m planning to combine the results of this experiment with our ongoing measurements of the length of the shadow cast by a vertical stick at the end of the year, to calculate not only the size of the Earth, but also its mass. It’ll be interesting to see how close we can get to that!

23. Straight line travel

Travel in a straight line across the surface of the Earth in any direction. After approximately 40,000 kilometres, you will find you are back where you started, having arrived from the opposite direction. While this sort of thing might be common in the wrap-around maps of some 1980s era video games, the simplest explanation for this in the real world is that the Earth is a globe, with a circumference of about 40,000 km.

It’s difficult to see how this sort of thing could be possible on a flat Earth, unless the flat Earth’s surface were subject to some rather extreme directional and distance warping that exactly mimics the behaviour of the surface of a sphere in Euclidean space. While this is not a priori impossible, it would certainly be an unlikely coincidence. Occam’s razor suggests that if it looks like a duck, quacks like a duck, and perfectly mimics the Euclidean geometry of a duck, it’s a duck.

This could be a very short and sweet entry if I left things there, but there are a few dangling questions.

Firstly there’s the question of exactly what we mean by a “straight line”. The Earth’s surface is curved, so any line we draw on it is necessarily curved in the third dimension, although this curvature is slight at scales we can easily perceive. The common understanding of a “straight line” on the Earth’s surface is the line giving the shortest distance joining two points as measured along the surface. This is what we mean when we talk about “straight lines” on Earth in casual speech, and it also matches how we’re using the term here.

In three dimensions, such “straight lines” are what we call great circles. A great circle is a circle on the surface of a sphere that has the same diameter as the sphere itself. On an idealised perfectly spherical Earth, the equator is a great circle, as are all of the meridians (i.e. lines of longitude). Lines of latitude other than the equator are not great circles: if you start north of the equator and travel due west, maintaining a westerly heading, then you are actually curving to the right. It’s easiest to see this by imagining a starting point very close to the North Pole. If you travel due west you will travel in a small clockwise circle around the pole.

Great circles

Great circles on a sphere. The horizontal circle is an equator, the vertical circle is a meridian, the red circle is an arbitrary great circle at some other angle.
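As a concrete illustration, the “straight line” (great circle) distance between two points can be computed from their latitudes and longitudes using the haversine formula. This sketch assumes a spherical Earth with a radius of 6371 km; the coordinates for London and Sydney are approximate.

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance between two points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Approximate coordinates: London (51.51 N, 0.13 W), Sydney (33.87 S, 151.21 E)
print(round(great_circle_km(51.51, -0.13, -33.87, 151.21)), "km")  # roughly 17,000 km
```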

Secondly, how can we know that we are travelling in such a straight line? The MythBusters once tested the myth that “It is impossible for a blindfolded person to travel in a straight line” and found that with restricted vision they were unable to walk, swim, or drive in a straight line over even a very short distance[1]. We don’t need to keep our eyes closed though!

When travelling through unknown terrain, you can navigate by using the positions of the sun and stars as a reference frame, giving you a way of determining compass directions. Converting this into a great circle path however requires geometric calculations that depend on the spherical geometry of the Earth, so this approach is a somewhat circular argument if our aim is to demonstrate that the Earth is spherical.

A more direct method to ensure straight line travel is to line up two landmarks in the direction you are travelling, then when you reach the first one, line up another beyond the next one and repeat the process. This procedure can keep your course reasonably straight, but relies on visible and static landmarks, which may not be conveniently present. And this method is useless at sea.

Modern navigation now uses GPS to establish a position accurate to within a few metres. While this could be (and routinely is) used to plot a straight line course, again it relies on geometrical calculations that assume the Earth is spherical. (It works, of course, because the Earth is spherical, but this renders this particular line of argument against a flat Earth circular.)

Before GPS became commonplace, a different sort of navigation system was in common use, and it is still used today as a backup for times when GPS is unavailable for any reason. These older systems are called inertial navigation systems (INS). They use components that provide an inertial frame of reference—that is, a reference frame that is not rotating or accelerating—independent of any motion of the Earth. These systems can be used for dead reckoning, which is navigating by plotting your direction and speed from your starting location to determine where you are at any time. They can be used to ensure that you follow a straight line path across the Earth, with reference to the inertial frame.

Inertial navigation systems can be built using several different physical principles, including mechanical gyroscopes, accelerometers, or laser ring gyroscopes utilising the Sagnac effect (previously discussed in these proofs). These systems drift in accuracy over time due to mechanical and environmental effects. Modern inertial navigation systems are accurate to 0.6 nautical miles per hour[2], or just over 1 km per hour. A plane flying at Mach 1 can fly a great circle route in just over 32 hours, so if relying only on INS it should arrive within about 36 km of its starting point, which is close enough that a pilot can figure out that it’s back where it started. So in principle we can do this experiment with current technology.
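The arithmetic behind that estimate can be checked in a few lines. The figures here are round-number assumptions (about 1225 km/h for the speed of sound at sea level), so treat the result as a rough scale, not a precise prediction.

```python
# Rough check of the INS drift estimate (round-number assumptions).
circumference_km = 40000          # great circle route around the Earth
mach1_kmh = 1225                  # approx. speed of sound at sea level
drift_kmh = 0.6 * 1.852           # 0.6 nautical miles/hour in km/h

flight_hours = circumference_km / mach1_kmh
total_drift_km = flight_hours * drift_kmh
print(f"{flight_hours:.1f} h of flight, about {total_drift_km:.0f} km of drift")
```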

Following a great circle on our spherical Earth is straightforward. But what does a great circle path look like when plotted on a hypothetical flat Earth? Here are a few:

Equator on flat Earth

The equator.

Great circle passing through London and Sydney

Great circle passing through London and Sydney.

Great circle passing through Rome and McMurdo Station, Antarctica

Great circle passing through Rome and McMurdo Station, Antarctica.

As you can see, great circle paths are distorted and misshapen when plotted on a flat Earth. If you follow a straight line across the surface of the Earth as given by inertial navigation systems there’s no obvious reason why you would end up tracing any of these paths, or why you would measure the same distance travelled (40,000 km) over all three paths when they are significantly different sizes on this map. And then consider this one:

Great circle passing through London and the North Pole

Great circle passing through London and the North Pole.

This circle passes through the north and south poles. If you travel on this great circle, then you have to go off one edge of the flat Earth and reappear on the other side. Which seems unlikely.

Travelling in a straight line and ending up where you started makes the most sense if the Earth is a globe.

References:

[1] “MythBusters Episode 173: Walk a Straight Line”, MythBuster Results, https://mythresults.com/walk-a-straight-line (accessed 2019-08-20).

[2] “Inertial Navigation System (INS)”, Skybrary, https://www.skybrary.aero/index.php/Inertial_Navigation_System_(INS) (accessed 2019-08-20).

22. Plate tectonics

Following the rediscovery of the New World by Europeans in the 15th century, the great seafaring nations of Europe rapidly mapped the eastern coastlines of the Americas. Demand for maps grew, not just of the New World, but of the Old as well. This made it possible for a young man (unfortunately women were shepherded into more domestic jobs) to seek his fortune as a mapmaker. One such man was Abraham Ortelius, who lived in Antwerp in the Duchy of Brabant (now part of Belgium).

Abraham Ortelius

Abraham Ortelius, painted by Peter Paul Rubens. (Public domain image from Wikimedia Commons.)

In 1547, at the age of 20, Ortelius began his career as a map engraver and illuminator. He travelled widely across Europe, and met cartographer and mapmaker Gerardus Mercator (15 years his senior, and whose map projection we met in 14. Map projections) in 1554. The two became friends and travelled together, reinforcing Ortelius’s passion for cartography, as well as the technical and scientific aspects of geography. Ortelius went on to produce and publish several maps of his own, culminating in his 1570 publication, Theatrum Orbis Terrarum (“Theatre of the Orb of the World”), now regarded as the first modern atlas of the world (as then known). Previously maps had been sold as individual sheets or bespoke sets customised to specific needs, but this was a curated collection intended to cover the entire known world in a consistent style. The Theatrum was wildly successful, running to 25 editions in seven languages by the time of Ortelius’s death in 1598.

Theatrum Orbis Terrarum

World map plate from Theatrum Orbis Terrarum. (Public domain image from Wikimedia Commons.)

Being intimately familiar with his maps, Ortelius noticed a strange coincidence. In his publication Thesaurus Geographicus (“Geographical Treasury”) he wrote about the resemblance of the shapes of the east coast of the Americas to the west coasts of Europe and Africa across the Atlantic Ocean. He suggested that the Americas may have been “torn away from Europe and Africa … by earthquakes and floods. … The vestiges of the rupture reveal themselves, if someone brings forward a map of the world and considers carefully the coasts of the three.” This is the first known suggestion that the uncanny jigsaw-puzzle appearance of these coastlines might not be a coincidence, but rather a vestige of the continents actually having fitted together in the past.

Ortelius wasn’t the only one to make this observation and reach the same conclusion. Over the next few centuries, similar thoughts were proposed by geographers Theodor Christoph Lilienthal, Alexander von Humboldt, Antonio Snider-Pellegrini, Franklin Coxworthy, Roberto Mantovani, William Henry Pickering, Frank Bursley Taylor, and Eduard Suess. Suess even suggested (in 1885) that at some time in the past all of the Earth’s continents were joined in a single mass, which he gave the name “Gondwana”.

Snider-Pellegrini illustration

Illustration by Antonio Snider-Pellegrini, of his proposal that the Americas had once been adjacent to Europe and Africa. (Public domain image from Wikimedia Commons.)

Although many people had suggested that the continents had once been adjacent, nobody had produced any supporting evidence, nor any believable mechanism for how the continents could move. This changed in 1912, when the German meteorologist and polar scientist Alfred Wegener proposed the theory which he named continental drift. He began with the same observation of the jigsaw nature of the continent shapes, but then he applied the scientific method: he tested his hypothesis. He looked at the geology of coastal regions, examining the types of rocks, the geological structures, and the types of fossils found in places around the world. What he found were remarkable similarities in all of these features on opposite sides of the Atlantic Ocean, and in other locations around the world where he supposed that now-separate landmasses had once been in contact. This is exactly what you would expect to find if a long time ago the continents had been adjacent: plants and animals would have a range spanning across what would later split open and become an ocean, and geological features would be consistent across the divide as well[1].

fossil distribution across continents

Map of similar fossils of non-sea-going lifeforms found across landmasses, providing evidence that they were once joined. (Public domain image from Wikimedia Commons.)

In short, Wegener found and presented evidence in support of his hypothesis. He presented his theory, with the evidence he had gathered, in his 1915 book, Die Entstehung der Kontinente und Ozeane (“The Emergence of the Continents and Oceans”). He too proposed that all of the Earth’s continents were at one stage joined into a single landmass, which he named Pangaea (Greek for “all Earth”)[2].

But Wegener had two problems. Firstly, he still didn’t know how continents could possibly move. Secondly, he wasn’t a geologist, and so the establishment of geologists didn’t take him very seriously, to say the least. But as technology advanced, detailed measurements of the sea floor were made beginning in the late 1940s, including the structures, rock types, and importantly the magnetic properties of the rocks. Everything that mid-20th century geologists found was consistent with the existence of a large crack running down the middle of the Atlantic Ocean, where new rock material was welling up from beneath the ocean floor, and spreading outwards. They also found areas where the Earth’s crust was being squashed together, and either being thrust upwards like wrinkles in a tablecloth (such as the Himalayas mountain range), or plunged below the surface (such as along the west coast of the Americas).

Confronted with overwhelming evidence—which it should be pointed out was both consistent with many other observations, and also explained phenomena such as earthquakes and volcanoes better than older theories—the geological consensus quickly turned around[3]. The newly formulated theory of plate tectonics was as unstoppable as continental drift itself, and revolutionised our understanding of geology in the same way that evolution did for biology. Suddenly everything made sense.

The Earth, we now know, has a relatively thin, solid crust of rocks making up the continents and sea floors. Underneath this thin layer is a thick layer known as the mantle. The uppermost region of the mantle is solid and together with the crust forms what is known as the lithosphere. Below this region, most of the mantle is hot enough that the material there is visco-elastic, meaning it behaves like a thick goopy fluid, deforming and flowing under pressure. This viscous region of the mantle is known as the asthenosphere.

structure of the Earth

Diagram of the Earth’s layers. The lithosphere region is not to scale and would appear much thinner if drawn to scale. (Public domain image from Wikimedia Commons.)

Heat wells up from the more central regions of the Earth (generated by radioactive decay). Just like a boiling pot of water, this sets up convection currents in the asthenosphere, where the hot material flows upward, then sideways, then back down to form a loop. The sideways motion at the top of these convection cells is what carries the crust above, moving it slowly across the surface of the planet.

The Earth’s crust is broken into pieces, called tectonic plates, which fit together along their edges. Each plate is relatively rigid, but moves relative to the other plates. Plates move apart where the upwelling of the convection cells occurs, such as along the Mid-Atlantic Ridge (the previously mentioned crack in the Atlantic Ocean floor), and collide and subduct back down along other edges. At some plate boundaries the plates slide horizontally past one another. All of this motion causes earthquakes and volcanoes, which are mostly concentrated along the plate boundaries. The motion of the plates is slow, around 10-100 millimetres a year. This is too slow to notice over human history, except with high-tech equipment. GPS navigation and laser ranging systems can directly measure the movements of the continents relative to one another, confirming the speed of the motion.
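Those plate speeds make a satisfying rough scale check. At spreading rates of a few tens of millimetres per year (the figures below are round-number assumptions), opening an Atlantic-sized ocean takes a couple of hundred million years, which is consistent with geological estimates of when the Atlantic began to form.

```python
# Rough scale check (round-number assumptions): how long would
# sea-floor spreading take to open an Atlantic-sized ocean?
spreading_mm_per_year = 25        # assumed average separation rate
atlantic_width_km = 5000          # rough present-day width

# 1 km = 1e6 mm, so convert the width to mm before dividing.
years = atlantic_width_km * 1e6 / spreading_mm_per_year
print(f"about {years / 1e6:.0f} million years")
```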

The tectonic plates, then, are shell-like pieces of crust that fit together to form the spherical shape of the Earth’s surface. An equal amount of area is lost at subduction zones as is gained by spreading on sea-floors and in places such as Africa’s Rift Valley, keeping the Earth’s surface area constant. As the plates drift around, they don’t change in size or deform geometrically very much.

Earth's tectonic plates

Sketch of the major tectonic plates as they fit together to form the surface of the Earth.

All of this is consistent and supported by many independent pieces of evidence. Direct measurement shows that the continents are moving, so it’s really just a matter of explaining how. But the motions of the tectonic plates only make sense on a globe.

If the Earth were flat, then sure, you could conceivably have some sort of underlying structure that supports the same sort of convection cells and geological processes of spreading and subduction, leading to earthquakes and volcanoes, and so on. But look at the shapes of the tectonic plates.

Earth's tectonic plates on a flat Earth

Sketch of the major tectonic plates on a flat Earth.

Because of the distortions in the shape of the map relative to a globe, the tectonic plates need to change shape and size as they move across the surface. Not only that, but consider the Antarctic plate, which is a perfectly normal plate on the globe. On the typical Flat Earth model where Antarctica is a barrier of ice around the edge of the circle, the Antarctic plate is a ring. And when it moves, it not only has to deform in shape, but crust has to disappear off one side of the disc and appear on the other side.

So plate tectonics, the single most fundamental and important discovery in the entire field of geology, only makes sense because the Earth is a globe.

Notes:

[1] For readers interested in this particular aspect of continental drift, I’ve previously written about it at greater length in the annotation to this Irregular Webcomic! http://www.irregularwebcomic.net/1946.html

[2] Pangaea is now the accepted scientific term for the unified landmass when all the continents were joined. Eduard Suess’s Gondwana lives on as the name now used to refer to the conjoined southern continents before merging with the northern ones to form Pangaea.

[3] Alfred Wegener is often cited by various people in support of the idea that established science often laughs at revolutionary ideas proposed by outsiders, only for the outsider to later be vindicated. This argument is usually made by people proposing outlandish or fringe science theories that defy not only scientific consensus but also the boundaries of logic and reason. What they fail to point out is that in all the history of science, Wegener is almost the only such case, whereas almost every other outsider proposing a radical theory has been shown to be wrong. As Carl Sagan so eloquently put it in Broca’s Brain:

The fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown.

Colour naming experiment, part 2

A couple of months ago I wrote about a colour naming experiment that I was planning to perform with the students in the Science Club that I volunteer to teach at a local primary school. You may want to go back and review that post, as today I’m going to talk about the results of the experiment.

I go back to teach the Science Club again next Monday, so it was time to sit down and analyse the results. I went through the answer sheets that the children filled in (there were 12 of them; one of the students was sick that day) and typed the names of each colour from each child into a spreadsheet. I thought the spreadsheet could accumulate the totals and make pie charts for me, but I discovered that I needed to manipulate the data first using a COUNT() function or something. While pondering whether to do this or to export all the data to CSV and write a Python program to do the gruntwork, one of my friends pointed me at this pertinent xkcd comic.

That inspired me to do all the processing in Python, and I discovered to my pleasant surprise that my machine already had the matplotlib library installed, so I could produce pie charts directly from Python. (Without sucking the munged data back into a spreadsheet again to do the graphs, as I feared I might have to do.) Anyway, long story short, here are the results (click the image for a huge readable version):

lots of pie charts

[I should point out that of course the colours in this image as displayed on your computer screen are not exactly the same as the colours printed on the paint sample charts that I assembled and gave to the children, because of the vagaries of colour calibration of monitors and the limited colour gamut of the graphic file format. Consider them only an approximation of what the children actually saw.]
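The tallying step described above is straightforward in Python with collections.Counter. The answer list below is invented for illustration (it’s not the real classroom data); passing the resulting counts to matplotlib’s pyplot.pie() then draws charts like the ones shown.

```python
from collections import Counter

# Hypothetical answers from 12 students for one colour swatch
# (invented for illustration; not the real classroom data).
answers = ["turquoise", "teal", "turquoise", "cyan", "sky blue",
           "turquoise", "teal", "blue", "turquoise", "aqua",
           "turquoise", "teal"]

counts = Counter(answers)
for name, n in counts.most_common():
    print(f"{name}: {n}/12")

# To draw a pie chart of these tallies with matplotlib:
#   import matplotlib.pyplot as plt
#   plt.pie(counts.values(), labels=counts.keys())
#   plt.show()
```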

That’s a lot to digest. Here are some highlights:

Firstly, here are the colours for which the largest number of people agreed on the name:

most agreed colours

Out of 12 people, three colours had 7 of them agree on what the colour should be called, and one colour had 6 people agree. There was no colour in the entire sample for which a 2/3 majority agreed on the name, let alone anything approaching unanimity. 31 of the 35 colours sampled had fewer than half the people agree on the name of the colour.

At the other end of the spectrum (ha ha!), here are the colours that had the most different names assigned:

most disagreed colours

Four colours had, in a sample of just 12 people, nine different colour names assigned to them. Three of these colours also had one or two students unable to decide on a name in the time allowed, and they left it blank on the answer sheet.

I should point out that names that were on the answer sheet are written in lower case with an initial capital, while names that the students chose to write in are written in all-capitals, and “NONE” indicates a student who didn’t give that colour any name. I gave them what I thought was a generous amount of time, but some of the students complained that it was too difficult and obviously struggled to complete the task. I did ask them beforehand if any of them knew they were colourblind, and none of them did. While there are two or three somewhat bizarre names assigned (“brown” for the colour that most kids identified as “lavender”, for example), I don’t see any real evidence that any of them are indeed colourblind (confusing reds and greens, for example).

Another thing you’ll notice if you examine the large image of all the pie charts is that the same colour word is used for several different colours, many times over. For example, “olive” is used to describe three different shades of green, as is “tree green”, while “carrot” is used to describe three different shades of orange, “turquoise” is used for three different shades of blue, and so on.

The conclusion from all of this? This basically confirms the research findings that I quoted in the first post on this experiment – that people are incredibly inconsistent when it comes to naming colours. If you say “olive”, or “carrot”, or “turquoise”, people have a reasonable general idea what sort of colour you mean, but many will not be thinking of the same shade of colour that you will, and will fail to pick it out of a line-up.

The second part of the experiment – showing that people are inconsistent with themselves – would require me to ask the children to do this entire task a second time. I was planning on doing this, but given how much some of them complained about it the first time, I think I’ll spare them doing it again, and do something a bit more fun with them instead. Hopefully however, when I show them the results on Monday they’ll think it’s pretty amazing and cool, like I do.

20.a Rocket launch sites update

I’ve been busy this week so there won’t be a new article until next week, but today there was an interesting news article on ABC News: NASA scientists visit NT site which could eventually blast rockets to the Moon.

NASA is interested in building a rocket launch site in Australia. Where have they chosen to build it? Near Nhulunbuy, in the far north of the Northern Territory. As well as Cape York Peninsula (mentioned in 20. Rocket launch sites), this is also pretty much as close to the equator as you can get within Australia. In fact it’s even a tiny bit further north than Weipa.

21. Zodiacal light

Brian May is best known as the guitarist of the rock band Queen.[1] The band formed in 1970 with four university students: May, drummer Roger Taylor (not the drummer Roger Taylor who later played for Duran Duran), singer Farrokh “Freddie” Bulsara, and bassist Mike Grose, playing their first gig at Imperial College in London on 18 July. Freddie soon changed his surname to Mercury, and after trying a few other bass players the band settled on John Deacon.

Brian May 1972

Brian May, student, around 1972, with some equipment related to his university studies. (Reproduced from [2].)

While May continued his studies, the fledgling band recorded songs, releasing a debut self-titled album, Queen, in 1973. It had limited success, but they followed up with two more albums in 1974: Queen II and Sheer Heart Attack. These met with much greater success, reaching numbers 5 and 2 on the UK album charts respectively. With this commercial success, Brian May decided to drop his academic ambitions, leaving his Ph.D. studies incomplete. Queen would go on to become one of the most successful bands of all time.

Lead singer Freddie Mercury died of complications from AIDS in 1991. This devastated the band and they stopped performing and recording for some time. In 1994 they released a final studio album, consisting of reworked material recorded by Mercury before he died, plus some new recordings to fill gaps. Since then May and Taylor have performed occasional concerts with guest singers, billed as Queen + (singer).

The down time and the wealth accumulated over a successful music career allowed Brian May to apply to resume his Ph.D. studies in 2006. He first had to catch up on 33 years of research in his area of study, then complete his experimental work and write up his thesis. He submitted it in 2007 and graduated as a Doctor of Philosophy in the field of astrophysics in 2008.

Brian May 2008

Dr Brian May, astrophysicist, in 2008. (Public domain image from Wikimedia Commons.)

May’s thesis was titled: A Survey of Radial Velocities in the Zodiacal Dust Cloud.[2] May was able to catch up and complete his thesis because the zodiacal dust cloud is a relatively neglected topic in astrophysics, and there was only a small amount of research done on it in the intervening years.

We’ve already met the zodiacal dust cloud (which is also known as the interplanetary dust cloud). It is a disc of dust particles ranging from 10 to 100 micrometres in size, concentrated in the ecliptic plane, the plane of orbit of the planets. Backscattered reflection off this disc of dust particles causes the previously discussed gegenschein phenomenon, visible as a glow in the night sky at the point directly opposite the sun (i.e. when the sun is hidden behind the Earth).

But that’s not the only visible evidence of the zodiacal dust cloud. As stated in the proof using gegenschein:

Most of the light is scattered by very small angles, emerging close to the direction of the original incoming beam of light. As the scattering angle increases, less and less light is scattered in those directions. Until you reach a point somewhere around 90°, where the scattering is a minimum, and then the intensity of scattered light starts climbing up again as the angle continues to increase. It reaches its second maximum at 180°, where light is reflected directly back towards the source.

This implies that there should be another maximum of light scattered off the zodiacal dust cloud, along lines of sight close to the sun. And indeed there is. It is called the zodiacal light. The zodiacal light was first described scientifically by Giovanni Cassini in 1685[3], though there is some evidence that the phenomenon was known centuries earlier.

Title page of Cassini's discovery

Title page of Cassini’s discovery announcement of the zodiacal light. (Reproduced from [3].)

Unlike gegenschein, which is most easily seen high overhead at midnight, the zodiacal light is best seen just after sunset or just before dawn, because it appears close to the sun. The zodiacal light is a broad, roughly triangular band of light which is broadest at the horizon, narrowing as it extends up into the sky along the ecliptic plane. The broad end of the zodiacal light points directly towards the direction of the sun below the horizon. This in itself provides evidence that the sun is in fact below the Earth’s horizon at night.

zodiacal light at Paranal

Zodiacal light seen from near the tropics, Paranal Observatory, Chile. Note the band of light is almost vertical. (Creative Commons Attribution 4.0 International image by ESO/Y.Beletsky, from Wikimedia Commons.)

The zodiacal light is most easily seen in the tropics, because, as Brian May writes: “it is here that the cone of light is inclined at a high angle to the horizon, making it still visible when the Sun is well below the horizon, and the sky is completely dark.”[2] This is because the zodiacal dust is concentrated in the plane of the ecliptic, so the reflected sunlight forms an elongated band in the sky, showing the plane of the ecliptic, and the ecliptic is at a high, almost vertical angle, when observed from the tropics.

zodiacal light at Washington

Zodiacal light observed from a mid-latitude, Washington D.C., sketched by Étienne Léopold Trouvelot in 1876. The band of light is inclined at an angle. (Public domain image from Wikimedia Commons.)

Unlike most other astronomical phenomena, this shows us in a single glance the position of a well-defined plane in space. From tropical regions, we can see that the plane is close to vertical with respect to the ground. At mid-latitudes, the plane of the zodiacal light is inclined closer to the ground plane. And at polar latitudes the zodiacal light is almost parallel to the ground. These observations show that at different latitudes the surface of the Earth is inclined at different angles to a visible reference plane in the sky. The Earth’s surface must be curved (in fact spherical) for this to be so.

zodiacal light from Europe

Zodiacal light observed from a higher latitude, in Europe. The band of light is inclined at an even greater angle from the vertical. (Public domain image reproduced from [4].)

[I could not find a good royalty-free image of the zodiacal light from near-polar latitudes, but here is a link to a copyrighted image on Flickr, taken from Kodiak, Alaska. Observe that the band of the zodiacal light (at left) is inclined at more than 45° from the vertical. https://www.flickr.com/photos/photonaddict/39974474754/ ]

zodiacal light at Mauna Kea

Zodiacal light seen over the Submillimetre Array at Mauna Kea Observatories. (Creative Commons Attribution 4.0 International image by Steven Keys and keysphotography.com, from Wikimedia Commons.)

Furthermore, at mid-latitudes the zodiacal light is most easily observed at different times in the different hemispheres, and these times change with the date during the year. Around the March equinox, the zodiacal light is best observed from the northern hemisphere after sunset, while it is best observed from the southern hemisphere before dawn. However around the September equinox it is best observed from the northern hemisphere before dawn and from the southern hemisphere after sunset. It is less visible in both hemispheres at either of the solstices.

seasonal variation in zodiacal light from Tenerife

Seasonal variation in visibility of the zodiacal light, as observed by Brian May from Tenerife in 1971. The horizontal axis is day of the year. The central plot shows time of night on the vertical axis, showing periods of dark night sky (blank areas), twilight (horizontal hatched bands), and moonlight (vertical hatched bands). The upper plot shows the angle of inclination of the ecliptic (and hence the zodiacal light) at dawn, which is a maximum of 87° on the September equinox, and a minimum of 35° on the March equinox. The lower plot shows the angle of inclination of the ecliptic at sunset, which is a maximum of 87° on the March equinox. (Reproduced from [2].)

This change in visibility is because of the relative angles of the Earth’s surface to the plane of the dust disc. At the March equinox, northern mid-latitudes are closest to the ecliptic at local sunset, but far from the ecliptic at dawn, while southern mid-latitudes are close to the ecliptic at dawn and far from it at sunset. The situation is reversed at the September equinox. At the solstices, mid-latitudes in both hemispheres are at intermediate positions relative to the ecliptic.

seasonal variation of Earth with respect to ecliptic

Diagram of the Earth’s tilt relative to the ecliptic, showing how different latitudes are further from or closer to the ecliptic at certain times of year and day.

So the different seasonal visibility and angles of the zodiacal light are also caused by the fact that the Earth is spherical, and inclined at an angle to the ecliptic plane. This natural explanation does not carry over to a flat Earth model, in which none of these observations of the zodiacal light has any simple explanation.

References:

[1] Google search, “what is brian may famous for”, https://www.google.com/search?q=what+is+brian+may+famous+for (accessed 2019-07-23).

[2] May, B. H. A Survey of Radial Velocities in the Zodiacal Dust Cloud. Ph.D. thesis, Imperial College London, 2008. https://doi.org/10.1007%2F978-0-387-77706-1

[3] Cassini, G. D. “Découverte de la lumière céleste qui paroist dans le zodiaque” (“Discovery of the celestial light that appears in the zodiac”). De l’Imprimerie Royale, par Sebastien Mabre-Cramoisy, Paris, 1685. https://doi.org/10.3931/e-rara-7552

[4] Guillemin, A. Le Ciel: Notions Élémentaires d’Astronomie Physique. Librairie Hachette et Cie, Paris, 1877. https://books.google.com/books?id=v6V89Maw_OAC

20. Rocket launch sites

Suppose you are planning to build an orbital rocket launching facility. Where are you going to put it? There are several issues to consider.

  • You want the site to be on politically friendly and stable territory. This strongly biases you to building it in your own country, or a dependent territory. Placing it close to an existing military facility is also useful for logistical reasons, especially if any of the space missions are military in nature.
  • You want to build it far enough away from population centres that if something goes catastrophically wrong there will be minimal damage and casualties, but not so far away that it is logistically difficult to move equipment and personnel there.
  • You want to place the site to take advantage of the fact that the rocket begins its journey with the momentum it has from standing on the ground as the Earth rotates. This is essentially a free boost to its launch speed. Since the Earth rotates from west to east, a rocket that is stationary on the pad relative to the Earth actually begins with significant momentum in an easterly direction. Rocket engineers would be crazy to ignore this.

One consequence of the rocket’s initial momentum is that it’s much easier to launch a rocket towards the east than towards the west. Launching towards the east, you start with some bonus velocity in the same direction, and so your rocket can get away with being less powerful than it otherwise would need to be. This represents a serious saving in cost and construction difficulty. If you were to launch a rocket towards the west, you’d have to engineer it to be much more powerful, since it first has to overcome its initial eastward velocity, and then generate the entirety of the westward velocity from scratch. So virtually no rockets are ever launched towards the west. Rockets are occasionally launched to the north or south to put their payloads into polar orbits, but most are placed into prograde orbits that travel substantially west-to-east.
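To put rough numbers on this eastward advantage, here is a back-of-envelope Python sketch. The low-Earth-orbit speed of about 7.8 km/s is an assumption for illustration, and gravity and atmospheric drag losses are ignored:

```python
import math

EQUATORIAL_SPEED = 465.1   # m/s, Earth's surface rotation speed at the equator
LEO_SPEED = 7800.0         # m/s, rough orbital speed in low Earth orbit (assumption)

def surface_speed(latitude_deg):
    """Eastward speed of the ground due to Earth's rotation at a latitude."""
    return EQUATORIAL_SPEED * math.cos(math.radians(latitude_deg))

lat = 28.5                 # roughly Cape Canaveral's latitude (illustrative)
boost = surface_speed(lat)
east = LEO_SPEED - boost   # velocity the rocket itself must supply, eastward launch
west = LEO_SPEED + boost   # velocity required for a westward launch
print(f"ground speed at {lat} deg: {boost:.0f} m/s")
print(f"delta-v to orbit, eastward: {east:.0f} m/s, westward: {west:.0f} m/s")
```

The gap between the eastward and westward cases is about 0.8 km/s, roughly ten percent of the orbital speed — a large penalty when rocket size grows exponentially with required delta-v.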

In turn, this means that when selecting a launch site, you want to choose a place where the territory to the eastern side of the site is free of population centres, again to avoid disaster if something goes wrong during a launch. The easiest way to achieve this is to place your launch site on the eastern coast of a landmass, so the rockets launch out over the ocean, though you can also do it if you can find a large unpopulated region and place your launch site near the western side.

When we look at the major rocket launch facilities around the world, they generally follow these principles. The Kennedy Space Center at Cape Canaveral is close enough to Orlando, Florida, to be logistically convenient, but far enough away to avoid endangering it in a launch disaster, and it is adjacent to Cape Canaveral Air Force Station for military logistics. It launches east over the Atlantic Ocean.

Kennedy Space Center

Kennedy Space Center launch pads A (foreground) and B (background). The Atlantic Ocean is to the right. (Public domain image by NASA.)

A NASA historical report has this to say about the choice of a launch site for Saturn series rockets that would later take humans to the moon[1]:

The short-lived plan to transport the Saturn by air was prompted by ABMA’s interest in launching a rocket into equatorial orbit from a site near the Equator; Christmas Island in the Central Pacific was a likely choice. Equatorial launch sites offered certain advantages over facilities within the continental United States. A launching due east from a site on the Equator could take advantage of the earth’s maximum rotational velocity (460 meters per second) to achieve orbital speed. The more frequent overhead passage of the orbiting vehicle above an equatorial base would facilitate tracking and communications. Most important, an equatorial launch site would avoid the costly dogleg technique, a prerequisite for placing rockets into equatorial orbit from sites such as Cape Canaveral, Florida (28 degrees north latitude). The necessary correction in the space vehicle’s trajectory could be very expensive – engineers estimated that doglegging a Saturn vehicle into a low-altitude equatorial orbit from Cape Canaveral used enough extra propellant to reduce the payload by as much as 80%. In higher orbits, the penalty was less severe but still involved at least a 20% loss of payload. There were also significant disadvantages to an equatorial launch base: higher construction costs (about 100% greater), logistics problems, and the hazards of setting up an American base on foreign soil.
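The cost of the “dogleg” mentioned in the report can be roughed out with the standard impulsive plane-change formula, Δv = 2v·sin(Δi/2). This treats the correction as a single idealized burn rather than the actual ascent trajectory, so take it as an order-of-magnitude sketch (the LEO speed is an assumed round figure):

```python
import math

def plane_change_dv(orbital_speed, delta_inclination_deg):
    """Ideal impulsive delta-v to rotate an orbital plane by the given
    angle: dv = 2 * v * sin(delta_i / 2)."""
    return 2.0 * orbital_speed * math.sin(math.radians(delta_inclination_deg) / 2.0)

v_leo = 7800.0  # m/s, rough circular speed in low Earth orbit (assumption)
dv = plane_change_dv(v_leo, 28.5)  # from Cape Canaveral's latitude down to equatorial
print(f"plane change 28.5 deg -> 0 deg at LEO: ~{dv:.0f} m/s")
```

Nearly 4 km/s just to rotate the orbital plane from 28.5° to the equator is a substantial fraction of the total velocity needed to reach orbit at all, which is consistent with the severe payload penalties the report describes.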

Russia’s main launch facility, Baikonur Cosmodrome in Kazakhstan (former USSR territory), launches east over the largely uninhabited Betpak-Dala desert region. China’s Jiuquan Satellite Launch Centre launches east over the uninhabited Altyn-Tagh mountains. The Guiana Space Centre, the major launch facility of the European Space Agency, is located on the coast of French Guiana, an overseas department of France on the north-east coast of South America, where it launches east over the Atlantic Ocean.

Guiana Space Centre

Guiana Space Centre, French Guiana. The Atlantic Ocean is in the background. (Photo: ESA-Stephane Corvaja, released under ESA Standard Licence.)

Another consideration when choosing your rocket launching site is that the initial momentum boost provided by the Earth’s rotation is greatest at the equator, where the rotational speed of the Earth’s surface is greatest. At the equator, the surface is moving 40,000 km (the circumference of the Earth) per day, or 1670 km/h. Compare this to latitude 41° (roughly New York City, or Madrid), where the speed is 1260 km/h, and you see that our rockets get a free 400 km/h boost by being launched from the equator compared to these locations. So you want to place your launch facility as close to the equator as is practical, given the other considerations.
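These figures are easy to verify: the ground speed at any latitude is just the equatorial speed scaled by the cosine of the latitude. A quick check in Python, using the text’s round figures of a 40,000 km circumference (more precisely 40,075 km) and a 24-hour day:

```python
import math

CIRCUMFERENCE_KM = 40075.0   # Earth's equatorial circumference
DAY_HOURS = 24.0             # solar day; the text's round figure

def rotation_speed_kmh(latitude_deg):
    """Ground speed due to Earth's rotation at a given latitude."""
    return (CIRCUMFERENCE_KM / DAY_HOURS) * math.cos(math.radians(latitude_deg))

for lat in (0, 28.5, 41):    # equator, Cape Canaveral, New York/Madrid
    print(f"latitude {lat:4}: {rotation_speed_kmh(lat):.0f} km/h")
```

This reproduces the roughly 1670 km/h at the equator and 1260 km/h at latitude 41° quoted above.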

Rotation of Earth

Because the Earth is a rotating globe, the equatorial regions are moving faster than anywhere else, and provide more of a boost to rocket launch velocities.

The European Space Agency, in particular, has problems with launching rockets from Europe, because of its dense population, unavailability of an eastern coastline, and distance from the equator. This makes French Guiana much more attractive, even though it’s so far away. The USA has placed its major launch facility in just about the best location possible in the continental US. Anywhere closer to the equator on the east coast is taken up by Miami’s urban sprawl. The former USSR went for southern Kazakhstan as a compromise between getting as far south as possible, and being close enough to Moscow. China’s more southern and coastal regions are much more heavily populated, so they went with a remote inland area (possibly also to help keep it hidden for military reasons).

All of these facilities so far are in the northern hemisphere. There are no major rocket launch facilities in the southern hemisphere, and in fact only two sites from which orbital flight has been achieved: Australia’s Woomera Range Complex, which is a remote air force base chosen historically for military logistical reasons (including nuclear weapons testing as well as rocketry in the wake of World War II), and New Zealand’s Rocket Lab Launch Complex 1, a new private facility for launching small satellites, whose location was governed by the ability to privately acquire and develop land.

But if you were to build a major launch facility in the southern hemisphere, where would you put it?

A major space facility was first proposed for Australia in 1986, with plans for it to be the world’s first commercial spaceport. The proposed site? Near Weipa, on the Cape York Peninsula, essentially as close to the equator as it’s possible to get in Australia.

Site of Weipa in Australia

Site of Weipa in Australia. Apart from Darwin which is at almost exactly the same latitude, there is no larger town further north in Australia. (Adapted from a Creative Commons Attribution 4.0 International image by John Tann, from Wikimedia Commons.)

The proposal eventually foundered due to lack of money and protests from indigenous land owners, but there is currently a State Government inquiry into constructing a satellite launching facility in Queensland, again in the far north. As a news story points out, “From a very simple perspective, we’ve got potential launch capacity, being closer to the equator in a place like Queensland,” and “the best place to launch satellites from Australia is the coast of Queensland. The closer you are to the equator, the more kick you get from the Earth’s spin.”[2]

So rocket engineers in the southern hemisphere definitely want to build their launch facilities as close to the equator as practically possible too. Repeating what I said earlier, you’d be crazy not to. And this is a consequence of the fact that the Earth is a rotating globe.

On the other hand, if the Earth were flat and non-rotating (as is the case in the most popular flat Earth models), there would be no such incentive to build your launch facility anywhere compared to anywhere else, and equatorial locations would not be so coveted. And if the Earth were flat and rotating around the north pole, then you’d get your best bang for buck not near the equator, but near the rim of the rotating disc, where the linear speed of rotation is highest. If that were the case, then everyone would be clamouring to build their launch sites as close to Antarctica as possible, which is clearly not the case in the real (globular) world.
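The contrast between the two models can be made concrete. On a globe, ground speed scales with the cosine of latitude and peaks at the equator; on a flat disc rotating about the north pole (using the usual azimuthal-equidistant flat-Earth layout — an assumption for illustration), it scales with distance from the pole and peaks at the southern rim. A hypothetical comparison:

```python
import math

OMEGA = 2 * math.pi / 24.0   # one revolution per 24 hours, in radians per hour
R_EARTH_KM = 6378.0          # Earth's equatorial radius, reused as the disc's scale

def globe_speed_kmh(latitude_deg):
    """Rotating globe: ground speed peaks at the equator (latitude 0)."""
    return OMEGA * R_EARTH_KM * math.cos(math.radians(latitude_deg))

def disc_speed_kmh(latitude_deg):
    """Flat disc rotating about the north pole: speed grows linearly with
    distance from the pole, so it peaks at the rim (the 'Antarctic' edge).
    Distance is taken from a hypothetical azimuthal-equidistant layout."""
    distance_from_pole = R_EARTH_KM * math.radians(90 - latitude_deg)
    return OMEGA * distance_from_pole

for lat in (80, 45, 0, -45):
    print(f"lat {lat:4}: globe {globe_speed_kmh(lat):6.0f} km/h, "
          f"disc {disc_speed_kmh(lat):6.0f} km/h")
```

On the disc model, southern mid-latitudes would beat the equator by well over 1000 km/h, so launch sites would cluster towards Antarctica. They don’t, because the Earth is a globe.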

[1] Benson, C. D., Faherty, W. B. Moonport: A History of Apollo Launch Facilities and Operations. Chapter 1.2, NASA Special Publication-4204 in the NASA History Series, 1978. https://www.hq.nasa.gov/office/pao/History/SP-4204/contents.html (accessed 2019-07-15).

[2] “Rocket launches touted for Queensland as State Government launches space industry inquiry”. ABC News, 6 September 2018. https://www.abc.net.au/news/2018-09-06/queensland-shoots-for-the-stars-to-become-space-hub/10205686 (accessed 2019-07-15).