
Sunday, September 22, 2019

The Physical Constants


“What’s your favourite word?”
“Passion,” she said. “And yours?”
“Wonder,” he replied.


There are few things as steadfastly reliable as the physical constants. If mankind were to create the universe as he imagines it to be, he could use these constants to set the rulebook that all matter obeys. The physical constants are, of course, determined by mankind’s observations of the universe and not the other way around. They are under constant and rigorous validation - the status bestowed upon them subject to scrutiny and change. But so far, so good. So let’s indulge ourselves.




Ok. That’s a lot of physical constants. I’m no physicist and to tackle a topic like this I’ll actually need to study physics. But that takes proper effort. And maths. And that’s not the way I roll. So, this being my blog and all, I’m just going to talk about the speed of light and Planck’s Constant. Without maths.

That the speed of light is a universal constant has profound ramifications on how we think about space and time. How we perceive space and time is an altogether different matter and we’ll get to that a bit later. 

The speed of light - or, more accurately, the speed of a massless particle such as a photon travelling in a vacuum - is 299,792,458m/s. The curious thing about the speed of light is that it measures the same for every observer regardless of how fast they are travelling. That is to say, if you are travelling in a vacuum at 15m/s directly towards a photon you won’t measure the photon’s relative speed as 299,792,473m/s (299,792,458 + 15) as Newtonian physics would suggest. Conversely, if you are travelling at 15m/s in the same direction as the photon you won’t measure the photon’s relative speed as 299,792,443m/s (299,792,458 - 15). In both cases you would measure exactly 299,792,458m/s (ie the speed of light). This might seem odd but that’s what you measure.
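
I did promise no maths, but a few lines of Python barely count. Here’s a minimal sketch using the standard relativistic velocity-composition formula (not quoted above, but it is what keeps the 15m/s from ever showing up in the measurement):

# Relativistic composition of velocities: w = (u + v) / (1 + u*v/c^2)
c = 299_792_458.0            # speed of light in m/s

def combine(u, v):
    """Relativistically compose two velocities u and v (in m/s)."""
    return (u + v) / (1 + u * v / c**2)

print(combine(c, 15))        # photon's speed composed with your 15 m/s: 299792458.0
print(combine(c, -15))       # same thing with you moving the other way: 299792458.0
print(c + 15)                # the naive Newtonian answer: 299792473.0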

Do the measurement enough times, in different settings and different scenarios, and you end up with the same result. Eventually you have to accept the result and incorporate that finding into any new theory on how the universe works. If the theory can make predictions that are replicable in future observations of the universe then you end up with a physical constant. And so it is with the speed of light.




If the speed of light is a physical constant - and speed is distance travelled divided by the time taken - then distance (ie the space between points) and time cannot be constant. This is the fundamental shift from Newtonian physics to Relativistic mechanics. Newtonian mechanics (the foundation of classical mechanics) still has its application in the tangible world of bridges and cannon balls. But relativistic mechanics takes over when things get very big, very small, or very fast. 

Here’s a commonly used example to help flip the brain into thinking more relativistically. A photon from the sun takes about 8 minutes to hit your eyeball. If you assume a vacuum in interplanetary space and ignore the Earth’s atmosphere it still takes the photon about 8 minutes to arrive. But from the perspective of the photon it hits your eyeball the instant it was created. That is to say, a photon, travelling at the speed of light in a vacuum, has no time dimension. Indeed, all massless particles (photons and gluons) travel at the speed of light in a vacuum and do not clock any time - and gravitational waves travel at that same speed. 

There are a couple of things to unpack in that paragraph. First, we need to understand why something travelling at the speed of light cannot have a time dimension. The easiest way to do this is to think of a photon travelling at the speed of light in a vacuum. From that photon’s “point of view” another photon travelling in exactly the same direction will also appear to travel at the speed of light. The only way that can occur is if time stands still. Another thought experiment that might have appealed to Einstein - the photon’s “point of view” might be better explained as length contraction to zero in the direction of its travel (see below) - is the concept of time dilation taken to its zenith. This, in short, can be summarised thus: an observer looking at a photon travelling at the speed of light would note that an identical, synchronised clock carried by that photon has not just slowed but actually stopped. That is to say, a clock carried by a photon runs infinitely slower when compared to everything else not travelling at the speed of light. Quite simply then: an infinitely slower clock is a clock that doesn’t register any time.
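
A rough numeric sketch of the same idea - this is just the standard time dilation (Lorentz) factor, nothing fancier:

# Time dilation: a clock moving at speed v ticks 1/gamma times as fast as ours,
# and 1/gamma heads to zero as v approaches the speed of light.
import math

c = 299_792_458.0

def gamma(v):
    """Lorentz factor for speed v (m/s)."""
    return 1 / math.sqrt(1 - (v / c) ** 2)

for v in (0.5 * c, 0.9 * c, 0.99 * c, 0.999999 * c):
    print(f"v = {v/c:.6f}c  ->  moving clock runs at {1/gamma(v):.6f}x our rate")
# in the limit v = c the rate is zero: the clock doesn't register any time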





The second thing to unpack is how we can progress from an observer-dependent framework to one that is observer-independent. Let’s restate the last sentence, ‘all massless particles, in a vacuum, travel at the speed of light’. This is more accurately stated as, ‘all massless particles, in a vacuum, travel at the speed of light when we observe them’. Photons and other subatomic particles exist whether or not we make the effort to observe them and measure their characteristics. We know that because we can see the effect they have on the universe around us without having to observe the particles directly. We tend to think of a photon as a light bulb travelling through a room at a very fast speed (in physics this “room” is referred to as an inertial frame of reference). That’s because the concept of locality within three-dimensional space and the arrow of time is deeply ingrained in how we see ourselves within our environment. In other words, we are using an observer-dependent framework where space and time are accepted as constants. This doesn’t work so well when dealing with the subatomic realm where space and time are not constant and the primacy of the speed of light becomes significant.


A photon isn’t a light bulb travelling in space at a very fast speed. It just “is”. When we try to define a light particle dimensionally it literally takes on the dimensional constraints of speed, time, and locality within space. What happens if we flip it over and try to understand the universe from the photon’s “point of view”? Better still, what happens if we try to develop an understanding of the subatomic realm using an observer-independent framework? Well, physicists are trying to do just that.

The subatomic realm is fast and fuzzy and the best tool we have to understand its physics is quantum mechanics. Quantum mechanics accepts the primacy of the speed of light but its starring feature - at least for lay people like me - is how it doubles down on the problem of dimensional space or, more accurately, the concept of locality. Quantum mechanics arose because classical mechanics failed to adequately explain observations like black body radiation and the photoelectric effect. This led to the concepts of quantisation and wave-particle duality, and then to the reimagining of quantum physics in the 1920s with Heisenberg’s uncertainty principle and quantum field theory. 

The history and development of the first two concepts (quantisation and wave-particle duality) are brilliantly described by YouTuber King Crocoduck..



.. who is yet to complete parts 3, 4, and 5. 

But we are not quite done with relativistic mechanics and how it changes the way we think about dimensional space. Taking the perspective of the photon from the sun as in the example above - and, again, assuming a vacuum between the sun and your eyeball - the distance or “space” between the sun and your eyeball, like time, also has no dimension for that photon. That is to say, at the speed of light in a vacuum, space contracts to nothing specifically in the direction of the photon’s travel (if you will, the photon only “sees” the two spatial dimensions orthogonal to its direction of travel). This is length contraction - as predicted by special relativity - taken to its zenith.
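
And the companion sketch for length contraction (the sun-Earth distance below is just a round illustrative figure):

# Length contraction: distances in the direction of travel shrink by the same factor.
import math

c = 299_792_458.0
distance_at_rest = 1.496e11        # roughly the sun-Earth distance in metres

def contracted(length, v):
    """Length in the direction of travel, as measured in the moving frame."""
    return length * math.sqrt(1 - (v / c) ** 2)

for v in (0.9 * c, 0.999 * c, 0.999999999 * c):
    print(f"v = {v/c}c  ->  {contracted(distance_at_rest, v):.3e} m")
# in the limit v -> c the distance contracts to nothing, as in the photon example above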

Length contraction can be thought of as the flip side of time dilation from the perspective of the “other” observer when dealing with the problem of measuring dimensional time and dimensional space within relativistic mechanics. Both are encapsulated in a one-parameter linear transformation (that parameter being velocity) called the Lorentz transformation. This actually involves a bit of maths so I’ll let someone else explain it.

Derivation of the Lorentz transformation:


Application of the Lorentz transformation:


If maths isn’t your thing then this might be a simpler explanation (assuming that you intrinsically accept that the underlying maths works out):


The study of physics has very fluid borders that permeate within its own discipline and into many other disciplines, given its elemental role in science and human inquiry. The study of the mechanics of motion and energy can be broadly separated into classical mechanics and quantum mechanics, with relativistic mechanics - predicated on the speed of light being a universal constant - providing a revolutionary way of thinking about space and time (special relativity being critical in the study of quantum mechanics among other things; general relativity being critical in the study of astronomy and cosmology among other things).

Most things studied in the subatomic realm have mass and/ or travel at speeds slower than the speed of light. Nonetheless they still travel at immensely fast speeds (hence the need to factor special relativity into the calculation). The weirdness of quantum mechanics arises because we are not used to thinking of space, time, and locality from the perspective of a subatomic particle. Observations of subatomic particles thus appear counterintuitive given the background of our normal daily experience. At a fundamental level quantum mechanics is to the study of space, time, and locality as general relativity was/ is to the study of gravity. Sure, when you drop a bowling ball on your foot you automatically reference the gravitational pull of the Earth as Newton famously did rather than curse the warping of spacetime as Einstein might have recommended - but at least Einstein provided a reason that Newton could not. 



So what exactly are space, time, and locality? Well, don’t ask me - there are many other people far more qualified than this dilettante. That aside, I think there are a few things that we can say. For a start, special relativity tells us that observer-independent space and time should be taken together as a four-dimensional manifold of “spacetime”. On one hand this evolved into general relativity with the warping of the 4D manifold under the influence of mass and momentum - hence John Wheeler’s quip, “mass tells spacetime how to curve, spacetime tells mass how to move”, or some variation of such. On the other hand this concept morphs into something even harder to comprehend when we take it to the subatomic realm - when we try to understand how things that are “very small and very fast” or “probabilistic and wave-like” exist and interact with each other. 




The diagram above is often used to illustrate the warping of spacetime under the influence of a large mass such as a planet. In other words: it is a spacetime diagram of gravity. You may also see examples that expand on this - videos where balls roll down the valley of a stretched tarpaulin model, or an illustration of light particles being deflected off the dimensional plane (the latter representing the bending of massless photons around massive objects, the observation of which brought general relativity and the man who thought it up firmly into the public arena). Nonetheless, if you are new to this area I suspect that this is a very confusing diagram to understand. This is because it is difficult to convey the properties of a 4D manifold using the format of a 2D diagram. Even with the assistance of lengthy explanations, even more diagrams, a few videos, and - God forbid - maths, it is still something that takes time to mull over. 

Well quantum mechanics is that much harder again. So much so that physicists who spend their entire lives working in this area have yet to come to terms with its fundamental nature. Yes, the application of quantum theory works remarkably well - the problem lies in what the theory actually describes. The subatomic realm can be described as definitive and particle-like or probabilistic and wave-like but these descriptions only hint at the underlying structure of the real thing. Are there more dimensions that haven’t been accounted for?  Is relativistic theory merely an approximation of spacetime that is even more convoluted or expansive? Why does every interpretation of quantum mechanics from Copenhagen onwards appear to involve some leap of faith? Does the linear and causal structure of language and, indeed, mathematics limit our ability to understand what is nonsensical to the human experience? 





Space, time, and locality are literal terms that carry a bias. So let’s start over again. To do that we need to wind back ninety years to a guy named Paul Dirac, a theoretical physicist who came up with a relativistic equation of motion for the wave function of the electron.

First, atomic spectra and the Bohr model of the atom. 




When a gas is heated it emits light that diffracts in a characteristic series of bands called its emission spectrum. Conversely, when a gas is cooled and light is shone through it, the light that passes through [the gas] diffracts in a characteristic absorption spectrum that is missing exactly those frequencies seen in the emission spectrum. The Bohr model of the atom provided a very handy explanation for this phenomenon. The proposal is that the negatively-charged electron orbits the positively-charged nucleus at set energy levels. When the gas is heated the electron is excited up an energy level; when it jumps back down it releases light at a set energy (ie at a set frequency). The converse is also true: a cool gas can absorb light, but only at the energy (frequency) that takes an electron up to the next level. The electron can have a number of energy levels around the nucleus but each level has a definite energy state (ie it is “quantised”) and the difference between them corresponds to the frequency of the light that is emitted or absorbed. Every atom has a unique and characteristic emission/ absorption spectrum. 
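
To put rough numbers on it, here’s the textbook Bohr-model arithmetic for hydrogen (the -13.6 eV figure is the standard ground-state value):

# The photon emitted when an electron drops between two allowed levels has an
# energy - and hence a wavelength - fixed entirely by the gap between the levels.
h = 6.626e-34          # Planck's Constant, J*s
eV = 1.602e-19         # one electronvolt in joules
c = 299_792_458.0

def level(n):
    """Bohr energy of hydrogen level n, in eV (negative = bound)."""
    return -13.6 / n**2

def emitted_wavelength(n_high, n_low):
    energy = (level(n_high) - level(n_low)) * eV    # photon energy in joules
    return h * c / energy                           # wavelength in metres

print(emitted_wavelength(3, 2))   # ~6.56e-7 m: the red H-alpha line of hydrogen
print(emitted_wavelength(4, 2))   # ~4.86e-7 m: the blue-green H-beta line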




Bohr’s model (proposed in 1913) superseded the Rutherford model which, in turn, had superseded the Thomson model of the atom. The experiments that led to these developments also firmly placed the nucleus as a positively charged element at the centre of the atom with electrons as negatively charged particles that orbited the nucleus (JJ Thomson discovered the electron in 1897, Ernest Rutherford the proton in 1917). Light, however, was understood to be a wave..

So what is a wave? Well, sound waves are vibrations of air molecules. That is to say that sound waves are waveforms that occur from the disturbance of a medium consisting of air molecules. The same applies to waves of water (with water being the medium). The entity of a sound wave or a water wave - and the energy it carries - results from particles of the medium in which it travels “moving out of position” then returning to a resting state. The waveform is therefore the oscillation of the medium in which the entity of the wave travels.

The study of electromagnetism (ie the study of the interaction between charged particles) has a long and fascinating history. In 2011 the BBC made an excellent series called “The Story of Electricity” which can be seen here..


In short, electricity and magnetism are two sides of the same phenomenon. The first to make the connection between electricity and magnetism was Hans Christian Ørsted in 1820 when he noticed that an electric current (ie movement of charge) could move a magnetic needle. About a decade later Michael Faraday showed that a changing magnetic field could likewise produce a current (electromagnetic induction). James Clerk Maxwell then developed the mathematical equations that described how electromagnetic fields develop from the movement of charge/ current and showed that electromagnetic radiation (propagating waves of the electromagnetic field consisting of what we now know as the EM spectrum from the longest of radio waves to the shortest of gamma rays) travelled through a vacuum at the speed of light. 
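
Maxwell’s punchline is one of the rare bits of physics arithmetic that fits in two lines, so here it is (standard SI values assumed):

# The speed of an electromagnetic wave falls out of two measured constants
# of electricity and magnetism: c = 1 / sqrt(mu_0 * epsilon_0).
import math

mu_0 = 4 * math.pi * 1e-7     # vacuum permeability (magnetic constant)
epsilon_0 = 8.854e-12         # vacuum permittivity (electric constant)

print(1 / math.sqrt(mu_0 * epsilon_0))   # ~2.998e8 m/s, ie the speed of light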

Light disperses and diffracts, both of which are wave-like properties. The double slit experiment also revealed the property of interference which adds further confirmation for the wave nature of light - ie light shone through two slits creates an interference pattern of alternate light and dark areas corresponding to wave peaks and troughs that alternately reinforce and cancel each other. 










Because light can travel through a vacuum it was thought that light waves propagated through an invisible, irreducible, infinite, and otherwise indifferent (at least where interactions in physics are concerned) medium known as the luminiferous ether. By the end of the 19th century - and despite the novel use of complex mathematical tools - the ether concept was starting to lose favour.. but it wasn’t dead. Not yet.

At the start of the twentieth century, the study of the photoelectric effect, and, in particular, Einstein’s proposition of a packet (or “quantum”) of light which we now call a photon, led to the idea that light also exists as a particle. Classical electromagnetism predicts that a light source striking a metallic surface will transmit energy to the material that leads to the release of electrons (also called photoelectrons) once a certain “work” threshold has been overcome. If this were a sliding scale of energy that builds up, reaches threshold, then releases photoelectrons, all would be well and good. But that’s not the case. Experiment shows that incident light needs to be of a certain threshold frequency (ie energy level) to release photoelectrons, with higher incident light frequencies imparting increasing kinetic energy to the emitted electrons. What’s more, at these frequencies the electron emission is effectively instantaneous. At lower frequencies there is no release of electrons regardless of the brightness (intensity) or duration of the incident light. In other words: red light, regardless of intensity or duration, does not release any photoelectrons from a metal plate, but green light (regardless of intensity and duration) releases photoelectrons of a certain kinetic energy, and blue light (again, regardless of intensity and duration) releases photoelectrons with a kinetic energy that is a set amount higher than that of green light. This was inconsistent with Maxwell’s classical electromagnetism. In 1905 Einstein came up with an equation that characterised the kinetic energy of the emitted electron as the frequency of the incident light multiplied by Planck’s Constant, minus the work function required to release the electron. Interestingly, at the time Max Planck (and Niels Bohr) would have none of it. Nonetheless, prior work from Wilhelm Röntgen and later work from Arthur Compton added further evidence for the particle nature of light. In 1918 Planck received the Nobel prize for his work on energy quanta. In 1921 Einstein received his Nobel prize for his work on the photoelectric effect.
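
A quick numeric sketch of Einstein’s relation (kinetic energy = hf minus the work function). The work function below is an assumed value, roughly that of caesium, purely for illustration:

# Photoelectric effect: photon energy must clear the work function before any
# electrons come out; whatever is left over becomes the electron's kinetic energy.
h = 6.626e-34            # Planck's Constant, J*s
c = 299_792_458.0
eV = 1.602e-19           # joules per electronvolt
work_function = 2.1      # eV - assumed value for this example

def kinetic_energy(wavelength_nm):
    """Max kinetic energy (eV) of an emitted electron, or None if below threshold."""
    photon_energy = h * c / (wavelength_nm * 1e-9) / eV   # photon energy in eV
    ke = photon_energy - work_function
    return ke if ke > 0 else None

print(kinetic_energy(650))   # red   -> None: no electrons, however bright the light
print(kinetic_energy(530))   # green -> ~0.24 eV
print(kinetic_energy(450))   # blue  -> ~0.66 eV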






So light can display both quantised, particle behaviour and stretched-out, wave-like behaviour. But if light can show features of both, is it possible that matter - classically described as unibody or particulate (ie protons, electrons, a cow, or a car) - also displays wave-like behaviour? Well, a French duke who first studied history and then went on to become a physicist thought it a reasonable question and set out to solve it. His name was Louis de Broglie.

De Broglie put forward his preposterous proposition in a doctoral thesis in 1924. He postulated the wave nature of electrons by reworking the equation of Einstein’s photoelectric effect and hypothesised that all matter has wave properties. The maths is actually quite simple when someone else does it for you:


The result was magnificent. The wave nature of the electron provided the mathematical structure for what Bohr had suggested but had never really explained. If electrons have wave properties then their orbits around the nucleus can be characterised by standing waves, each of which has to be a whole number of wavelengths. These standing waves correspond to the allowed energy levels of the electron orbits suggested by Bohr which, in turn, explained atomic spectra. In other words, the absorption spectrum of an element such as hydrogen (which has one electron orbiting a nucleus consisting of a single proton) can be explained by the wavelength of its electron, which dictates the specific quanta of energy absorption that take that electron from one standing wave to a higher standing wave - each an exact whole number of the electron’s wavelengths. Anything that isn’t a whole number of the electron’s wavelength results in a waveform that cancels itself out. The reverse applies to the emission spectrum.


Standing wave = waveform that fits between two fixed points (like a plucked string), which has to be a whole number of the wave’s ½ wavelengths. An electron’s “orbit” conceptually has to be a whole number of its wavelength rather than ½, 1 ½, 2 ½, etc as the antinodes will cancel out.
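
For the curious, here’s the standing-wave bookkeeping in a few lines (the Bohr radius and ground-state speed are standard textbook values, not derived here):

# A whole number of de Broglie wavelengths fits around each allowed Bohr orbit.
import math

h = 6.626e-34            # Planck's Constant
m_e = 9.109e-31          # electron mass, kg
a_0 = 5.29e-11           # Bohr radius, m
v_1 = 2.19e6             # electron speed in the lowest Bohr orbit, m/s

for n in (1, 2, 3):
    r = a_0 * n**2                   # radius of the nth Bohr orbit
    v = v_1 / n                      # electron speed in that orbit
    wavelength = h / (m_e * v)       # de Broglie wavelength of the electron
    print(n, round(2 * math.pi * r / wavelength, 2))   # -> 1.0, 2.0, 3.0 wavelengths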







The committee looking into the validity of the paper could find no fault in the mathematics but could not come to terms with its implications. So they sent it off to Einstein who thought the paper had merit, and de Broglie got his doctorate. The first double slit experiment illustrating the wave nature of light had been performed by Thomas Young in the early 1800s. In 1927 Clinton Davisson and Lester Germer did a much more elaborate but conceptually similar diffraction experiment for electrons (scattering them off a nickel crystal), thereby showing that electrons also display wave-like interference patterns. Since then atoms and even molecules have also been shown to display wave-like interference patterns. In 1929 de Broglie received the Nobel prize for his insight into the wave nature of matter.

Ok. Let’s take a breather. 

Photons and electrons can display particle-like behaviour and wave-like behaviour. The de Broglie wavelength equation applies to all matter, which implies that all particles and unibodies (such as a human) also have a wave description. The fact that unibodies beyond those that inhabit the atomic/ subatomic realm have wavelengths that are so minute as to be irrelevant means that macroscopic things such as humans don’t diffract when they walk through a doorway. The interesting thing is that the particle description and wave description of an entity are incompatible experimentally and mathematically. That is to say they have to be one or the other but not both at the same time. We’ll get to that.
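
To see just how minute, here’s the de Broglie arithmetic for an electron versus a person (the speeds are arbitrary but plausible):

# de Broglie wavelength: lambda = h / (m * v)
h = 6.626e-34   # Planck's Constant, J*s

def de_broglie(mass_kg, speed_ms):
    return h / (mass_kg * speed_ms)

print(de_broglie(9.109e-31, 1e6))   # an electron at a million m/s: ~7e-10 m, atom-sized
print(de_broglie(70, 1.5))          # a 70 kg person strolling through a doorway: ~6e-36 m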

I also need to get to Paul Dirac and his relativistic equation for the wave function of the electron. I will get there but I want to jump ahead to where Dirac eventually takes us, which is Quantum Field Theory. What Quantum Field Theory (QFT) proposes is that the elementary particles of the subatomic realm don’t really exist as particles or waves but display features of each depending on how we choose to look at them. In short, QFT takes the Standard Model of particle physics and conceptualises the particles as localised perturbations of the quantum fields to which they belong.

If you will, I don’t think it is too mischievous to call this “ether version 2”. Ether (or aether), for those too young to have learnt about this historical but somewhat dated concept, is the fifth element added by Aristotle to the previously described four basic elements (earth, water, air, fire) proposed by Empedocles. Ether was something that stood outside the material world - something that could be considered, in retrospect, to be the dumping ground for stuff that occurred (and could be observed) in the material world but could not be explained from within it. By the start of the twentieth century it was recognised that there were a few significant hurdles in the study of physics. Two important hurdles were the explanation of gravity and the stumbling block of the luminiferous ether (ether’s then-current conceptualisation, challenged by a number of experimental results including the Michelson-Morley experiment). Einstein might have got the Nobel prize for his work on the photoelectric effect but the genius for which he is remembered is the development of General Relativity which explained gravity as the curvature of spacetime. The need for a luminiferous ether disappeared with the birth of Relativistic Theory and Quantum Mechanics in the first quarter of the 20th century but this brought its own problems as we will discuss below. If you will, QFT is a far more sophisticated and nuanced version of the ether concept.


QFT proposes that quantum fields are everywhere and what we think of as waves or particles are essentially localised vibrations within these quantum fields. The quantum field is the “medium” of the quantum realm just like air is for sound waves and water is for, um, water waves. Ether had its historical role in the transmission of light/ electromagnetic radiation (and gravity). Unlike ether, QFT has interacting fields that correspond to each and every subatomic entity of the Standard Model. Also, unlike ether, QFT is a remarkably powerful tool in physics.


This is not a pipe.


And this is not a cat.


And this is not quantum reality..
.. but it is an important facet of the best description we currently have for it.


The Standard Model of Elementary Particles describes two separate groups of identical particles called fermions (which are matter particles) and bosons (which are force carriers). Identical particles are particles that share identical properties and are thereby indistinguishable from each other - eg every electron has exactly the same intrinsic characteristics as every other electron, every photon has exactly the same intrinsic characteristics as every other photon. The elementary identical particles are those listed in the Standard Model but protons and neutrons (made up of three quarks joined together by gluons), atomic nuclei and molecules are also examples of identical particles. Protons and neutrons also belong to another grouping of identical particles called hadrons. The beauty of identical particles is that they can be analysed using statistical techniques. And statistical mechanics is an indispensable tool of modern physics. 






At a fundamental level, there are two ways to investigate the interactions of a closed system. The first is to track the movement and behaviour of each individual element of the system - eg when studying the interactions of a tribe of chimpanzees. The other is to apply statistical methods to analyse the behaviour of identical elements within that closed system - eg the experiments by Rutherford and Bohr which gave insight into atomic structure. When dealing with the tiniest of things (eg in thermodynamics and quantum mechanics) statistical methods (specifically statistical mechanics) help resolve experimental results into something that takes on meaning and predictability. Statistics, however, cannot predict a specific outcome. It can only provide a probability distribution of possible outcomes.

Until proven otherwise there are three intrinsic characteristics of elementary particles: mass, charge and spin. Because these are fundamental properties they are also incredibly difficult to define with universal precision. It suffices to say that physicists who deal with these things generally agree on what these properties represent just as painters generally agree on the hue, tone and saturation of a particular colour of blue. Broadly speaking, spin in quantum mechanics refers to an intrinsic form of angular momentum (it is not the same as orbital angular momentum, nor a literal spinning) and is best left at that. Fermions have half-integer spins and bosons have full-integer spins. The Dirac equation relates specifically to the electron and generally to spin-1/2 particles with non-zero rest mass. 


(.. and yes, I have also read Jim Baggott’s excellent book “Mass”.)


Paul Dirac takes us deep into the field of mathematical physics which is itself a branch of applied mathematics. If you, like me, skipped the past two thousand years of mathematical insights then we are Alice and this is clearly Wonderland. The mathematical world beyond basic arithmetic has channelled serious smarts into equations that explain the nature of - or rather, provide the mathematical structure for - curves, waves, geometric shapes, multiple dimensions and statistics. Paul Dirac was a theoretical physicist. This means he looked at ways to interpret experimental results, came up with a mathematical formulation for that interpretation, then checked for its validation in future experiments. Dirac worked at a time when relativistic mechanics was well accepted and the foundations of quantum mechanics were being elaborated by the likes of Werner Heisenberg, Wolfgang Pauli, and Erwin Schrödinger. Dirac’s aim was to explain the movement of the electron relativistically while accommodating the principles of quantum mechanics, thereby giving the explanation of atomic spectra a more rigorous and up-to-date formulation. Well, he got what he set out to achieve - so another tick for the phenomenology of quantum mechanical principles. But the equation that now bears his name also revealed a lot more about the quantum world.




In simple terms, the Schrödinger equation is the quantum mechanical equivalent of Newton’s second law of motion. Newton’s second law (a=F/m, more commonly stated as F=ma) relates the three parameters of force, mass and acceleration of a particle in motion (ie a particle not at equilibrium) and allows calculation of the third parameter if the other two are known. In classical mechanics Newton’s second law allows prediction of where a particle will be in the future when a set of given conditions (eg the particle’s position in space and time, the prevailing wind velocities etc) are known. The Schrödinger equation is a type of wave equation that applies to quantum mechanical systems. It arose as an alternative but equivalent computation to matrix mechanics (see below). In quantum mechanics individual outcomes arise from chance, but the wave equation allows the prediction of the probability distribution for a large number of outcomes. Although Schrödinger’s equation describes the wave function of the quantum system it does not actually reveal what the wave function is (ie the wave function is the mathematical description of a wave but what exactly is “waving” and what is the medium in which it travels?). 






A fermionic field (spin-1/2 field) is a quantum field where the quantum is a fermion (matter particle). Fermionic fields differ fundamentally from bosonic fields in the way their constituents behave, with the first obeying Fermi-Dirac statistics and the second obeying Bose-Einstein statistics (these statistics describe the distribution of energy states in systems consisting of many identical particles). Fermi-Dirac statistics takes into account the Pauli exclusion principle (which states that fermions cannot be at the same place, at the same time, with the same energy), which draws on Wolfgang Pauli’s identification of a fourth component of the electron’s quantum state (the fourth quantum number now recognised as electron spin - ie if two electrons share the first three quantum numbers then one must be spin up, the other spin down). Bosons (force particles), on the other hand, are not restricted to single-occupancy quantum states (a good example being a laser beam consisting of “in-phase” photons).
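
A toy comparison of the two statistics (arbitrary energy units, values picked purely for illustration):

# Average number of identical particles occupying a single energy state.
import math

def fermi_dirac(energy, mu, kT):
    """Fermions: the average occupancy can never exceed 1."""
    return 1 / (math.exp((energy - mu) / kT) + 1)

def bose_einstein(energy, mu, kT):
    """Bosons: the average occupancy can grow without limit as energy -> mu."""
    return 1 / (math.exp((energy - mu) / kT) - 1)

# same state, same temperature, energy just above the chemical potential mu
print(fermi_dirac(1.01, 1.0, 0.05))    # ~0.45 - and always less than 1
print(bose_einstein(1.01, 1.0, 0.05))  # ~4.5 - no single-occupancy limit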








Enrico Fermi (the same guy who worked on the Manhattan project with Robert Oppenheimer et al) was the first to develop the statistical mechanics that took account of Pauli’s exclusion principle (to close the loop, particles that obey the exclusion principle are now called fermions while those that do not are called bosons). Dirac independently arrived at the same statistical formulation a little later by studying the work of Werner Heisenberg whose endeavours - through the mathematical guidance of Max Born and Pascual Jordan - led to the development of matrix mechanics (specifically, the application of a mathematical tool for the non-commuting operations proposed by Heisenberg to explain observations of an electron’s original and final positions). It was Heisenberg’s remarkable insight that freed observations of the electron from the shackles of classical mechanics. Sixty days later, at the end of 1925, the first conceptually autonomous and logically consistent formulation of quantum mechanics had arrived.





Vectors as an example of a commutative operation.


Twists of a Rubik’s cube as an example of a noncommutative operation.


Dirac took Heisenberg’s work in a different direction. Dirac recognised that Heisenberg’s operations had a similar structure to another mathematical tool - called Poisson brackets - used in equations for particle motion. He then developed a quantum theory based on non-commuting dynamical variables which became canonical quantisation (regarded as the most significant and profound general formulation for quantum mechanics). 

It was now 1926. 


In short, the first quarter of the 20th century had seen quantum mechanics trickle into existence and evolve as an ad hoc redecoration of classical physics under the direction of Bohr, Planck and Einstein. But the evermore problematic disjunction between observations of the quantum realm and the pseudo-classical mechanics used to describe it meant some reimagination was in order. By the second quarter - guided by the likes of Heisenberg, Schrödinger, Pauli, and Dirac - a revolution was underway and quantum mechanics was unstoppable.

The Dirac equation (formulated and published in 1928) is a rigorous and thoroughly mathematical description that draws from the work of many pioneers in the field of quantum mechanics. It takes Schrödinger’s wave mechanics and elements of Heisenberg’s matrix mechanics and puts them in the context of fermionic fields while taking into account special relativity. As stated a little while back, it is a relativistic equation of motion for the wave function of the electron. Like the Schrödinger equation it is a wave equation. It ranks with the equations of Maxwell (electromagnetism) and Einstein (general relativity) for the pivotal role it played in furthering the frontiers of physics.



The mathematically gifted who want to go turbo on this can take a look here..

.. although you might want to start here.. 

The Dirac equation revealed that electrons have spin (clarifying the nascent work of Pauli), predicted the presence of the positron (and thereby the presence of antimatter), and laid the foundation for quantum electrodynamics. In 1930 Paul Dirac wrote an influential book called “The Principles of Quantum Mechanics” which outlined the logical process of developing a new theoretical framework “by working on the great principles, with the details left to look after themselves". Its 257 pages contained 785 equations and didn’t have any diagrams.

Ok. Let’s take another breather.

Maths is the language of modern physics. Like all languages maths arose as a simple means of communicating then evolved into a useful tool for describing the world around us. Beyond that, language, including maths, can take us to an equally wonderful world of abstract thought. Although all languages have structure, maths has a particularly rigorous and methodical structure. And the maths of quantum mechanics is abstract and impenetrable for all but the most learned and determined. This hashed-up, cobbled-together storyline cannot hope to convey the power of a mathematical equation. Superficial? Yes. Simplified? Yes. Prone to tangential abstractions? Umm.. well, yes. But for the vast majority of us that’s all we’ve got. From a maths perspective quantum mechanics is simply physics. From the perspective of the written or spoken language quantum mechanics sounds a lot like metaphysics. 






In 1927 some of the greatest minds met at the 5th Solvay International Conference to flesh out quantum mechanics and outline what the findings meant. Disagreements reigned (it didn’t help that three of the revolutionaries pulling down the shackles of the establishment - Heisenberg, Pauli, and Dirac - were in their early to mid twenties) but things progressed quickly over the ensuing years. Heisenberg’s uncertainty principle (1927) led him to intuit that the observation of an event affected its outcome. This view was opposed by Bohr and his intuition that uncertainty itself (ie probabilistic outcomes) was an intrinsic feature of the quantum realm. In February 1932 the neutron was discovered by James Chadwick. Six months later the positron made its appearance. A year earlier Edwin Hubble had published a series of observations showing the universe was expanding (shattering Einstein’s static universe and putting in doubt the cosmological constant he derived to balance the effect of gravity) and set the scene for dark matter (not to be confused with antimatter) and dark energy. Combining Hubble’s findings and general relativity meant the field of cosmology also underwent a revolution of its own (its third, after Claudius Ptolemy and Nicolaus Copernicus). In 1937 the muon was discovered by Carl Anderson (who also made the discovery of antimatter in the form of the positron) and Seth Neddermeyer. In 1938 nuclear fission was discovered by Otto Hahn. In 1939 the second world war broke out and a lot of physicists redistributed out of the Axis powers - many to the United States. In 1945 two atomic bombs were dropped, one over Hiroshima and another over Nagasaki. In 1947 the pion was discovered by Powell, Occhialini and Lattes.




Back to the job.

Quantum electrodynamics (QED) is the pièce de résistance of quantum mechanics. Of the various theories that, in combination, make up the Standard Model of Particle Physics, it is the most precise and well-understood. QED specifically deals with how electrons interact with the electromagnetic field (ie with photons) which is, in essence, a theory of how matter interacts with light. It also allows physicists to deal with the anti-electron - ie the positron. The positron was initially a problematic outcome of the Dirac equation as the maths pumped out two equally valid solutions. At the time Dirac thought he had found a relationship between the two known subatomic particles which, conveniently, had opposite charge (ie the proton and the electron) and hoped future research could somehow explain the significant difference in mass (a proton’s mass is ~2,000x that of an electron). The answer instead was the positron (postulated by Dirac three years after he derived his equation and formally identified the following year). The positron is the antimatter counterpart of the electron and shares all the electron’s properties with the exception of having an opposite charge. When positrons and electrons collide they annihilate each other with a release of photons - which, incidentally, is how a PET (positron emission tomography) scanner works (ie positron-emitting radioactive isotopes are added to a substrate such as glucose which then gets taken up by tissue such as active areas of the brain: the released positrons then strike electrons in the tissue and the resultant gamma radiation is picked up by detectors).
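
For the record, the gamma photons a PET scanner looks for carry the electron’s rest-mass energy - a one-line calculation:

# Electron-positron annihilation: each of the two back-to-back photons carries
# the rest-mass energy of an electron, E = m*c^2.
m_e = 9.109e-31          # electron (and positron) mass, kg
c = 299_792_458.0
eV = 1.602e-19

print(m_e * c**2 / eV / 1000)   # ~511 keV per photon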





QED draws from a wealth of research with some of the key contributions outlined in the wordage above. If you will, it is the grand union of quantum mechanics and classical electromagnetism (the latter based on the concept of electromagnetic fields). In the late 1920s physicists proposed an alternative structure: quantum fields. In quantum field theory the quantised particle is conceptualised as a localised excitation of a quantum field. QED was the first of these field theories. A quantum field theory for electromagnetism was postulated by Hans Bethe and Enrico Fermi in 1932 with the proposition that electron repulsion - considered by classical physics to be a result of the repulsion of charged electron “fields” getting too close - could be explained in quantum terms through the exchange of a “virtual photon” between the two electrons. From a layman’s perspective this is either quite clever or really devious as it simply exchanged a classical field medium you couldn’t see for a quantum photon you couldn’t see. But it was consistent. In a reality that has to take into account both wave and particle properties for electrons and photons this “quantum interaction” of the electron field (the quantum being the electron) and the electromagnetic field (the quantum being the photon) makes sense because it is internally consistent with the theory of quantum fields (we’ll discuss how quantum mechanics deals with the concept of a “field” below). This is, of course, a circular argument. But it kinda makes sense so let’s run with it for now. 

Getting the maths to work was a lot harder.





Particle physics was a bit of a mess at the time of the Solvay Conference in 1927. It tumbled on through the second world war and risked falling apart altogether at another invitation-only gathering at Shelter Island in 1947. It became obvious that understanding how the maths worked was critical in resolving conflicts arising from combining special relativity and quantum mechanics (things such as negative energy and negative probability) and, in early June 1947, there was the additional challenge of two problematic experimental results. Also - at a more fundamental level for plebs like me - the thinking behind the maths needs to provide a satisfactory framework for electrodynamics. It needs to answer things like: what does it mean when an electron and anti-electron annihilate each other (where do these particles disappear to)? What is the photon before it gets released in black body radiation (and how does it resolve then propagate when an electron drops down an energy level)? What does it really mean to be a wave and a particle (ie what really explains the measurement problem)? How can we think of a subatomic particle (like a photon or electron) and take it from something that just “is” to something that is more workable?


For a theory to survive it must be able to explain observations that are accurate and reproducible. 

The two problematic experiments presented at Shelter Island were the Lamb shift and another involving a deviation of the electron’s measured vs theoretical g-factor.

The theory:
Recall that the emission spectrum of hydrogen in the Bohr model represents the energy state of the electron as it drops from a higher energy “orbit” to a lower one. In quantum mechanics the “orbit” of the electron is its principal (or first) quantum number. Subsequent work by de Broglie, Heisenberg, Schrödinger, Pauli, and Dirac (but also many others) led to the concept of a probabilistic “orbital” haze (I’ve skipped Heisenberg’s uncertainty principle but will get to that) instead of a classical planetary-style orbit for the electron and also introduced new concepts like the electron’s orbital shape, orbital orientation, and intrinsic spin direction (ie the other three quantum numbers). In theory, how these qualities interact explains the minute complexities (energy levels) seen within spectral lines.

Theory applied to experimental observation:
Spectroscopy (the study of the interaction between electromagnetic radiation and matter - ie the emission and absorption of photons by matter) has a long and illustrious position in science. In particular, spectroscopy plays an enormous role in the understanding and advancement of physics, chemistry and astronomy. It led to Bohr’s model of quantised electron orbits for the hydrogen atom and also showed the model’s limitation in describing energy level transitions for atoms with more than one electron (ie it was rubbish for describing atoms from helium onwards). The quantum mechanical equations developed by Heisenberg et al (matrix mechanics) and Schrödinger (wave mechanics) and refined by Pauli, Born and others led to an accurate and useful description for atomic spectra seen with more complex atoms, ions, and simple molecules. Dirac’s work took the quantum mechanical description a step further by placing it into a relativistic context and establishing the fourth (electron spin) of the four quantum numbers that describe the state of the electron. The interaction of the electron’s orbital motion (which creates a magnetic field) and the electron’s intrinsic magnetic dipole moment (ie its spin) explains the complexities of hydrogen’s spectral lines called its “fine structure” - ie this spin-orbit interaction modifies the electron’s energy level (given by its principal quantum number) in proportion to the combination of its orbital angular momentum and its spin angular momentum. This energy shift is tiny - orders of magnitude smaller than the electron’s orbital energy levels.

A side note: 
As it turns out, hydrogen - the darling of particle physics just as the fruit fly is the darling of genetic biology - also has a “hyperfine” structure which is the result of, 1. the interaction between the total magnetic moment of the electron (resulting from the spin-orbit interaction described above) and the magnetic moment of the nucleus and, 2. the electrostatic interaction between the electric quadrupole moment of the nucleus (ie the parameter that describes the effective shape of the charge distribution of the nucleus) and the electron. 



Shelter Island problem 1:
The Lamb shift describes an experimental result by Willis Lamb and Robert Retherford looking specifically at the energy level of the electron of hydrogen at the second orbital level with either one of two orbital shapes. One electron has an orbital shape like a sphere, the other like a dumbbell (notated by 2S1/2 and 2P1/2 respectively). The Dirac equation, for all its cleverness in explaining hydrogen’s fine structure (ie by taking account of all four quantum numbers), derives identical energy levels for both of these electrons. For these two states that makes it no better than the Schrödinger equation nor, for that matter, the Bohr model (which only takes account of the orbital number). What the Lamb-Retherford experiment showed was a slight shift in the spectral line representing the different energy levels between these two electrons. 





Shelter Island problem 2:
To make things worse, Isidor Rabi, the guy who won the Nobel prize in 1944 for his discovery of nuclear magnetic resonance (used in MRI machines), then stood up to present some interesting work by his colleagues at Columbia University. The experiment he presented showed that the measured g-factor of the electron (the g-factor being a dimensionless quantity of a particle/ nucleus/ atom that governs how it interacts with an external magnetic field) is slightly larger than that predicted by Dirac theory. The predicted g-factor for the electron is 2. The measured amount presented at the meeting was slightly larger at 2.00244.
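
As a teaser for where this ends up: the leading QED correction (Schwinger’s 1948 result, which came after Shelter Island) already accounts for most of that discrepancy:

# First-order QED correction to the electron's g-factor: g = 2 * (1 + alpha/(2*pi)).
import math

alpha = 1 / 137.036                          # fine structure constant (approximate)
g_dirac = 2.0                                # Dirac's prediction
g_first_order = 2 * (1 + alpha / (2 * math.pi))

print(g_dirac)           # 2.0
print(g_first_order)     # ~2.00232 - most of the way to the 2.00244 presented at the meeting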


This is the current g-factor for the electron (measurement top, predicted bottom).


In the spirit of this monologue it’s going to be much easier for me to give a fluffy explanatory answer for the Lamb shift and the electron’s g-factor than to sit down, nut it out, and explain it properly. If you want a proper mathematical explanation then you will need to get a proper physics textbook and an appropriate teaching institution. The maths for QED is incredibly complex and its application/ realisation has three important components: Feynman diagrams, perturbation theory, and renormalisation. An impression of the latter two of these concepts can be gained through PBS Spacetime:
and through Fermilab:
These two sources are also referenced for the Feynman diagrams below.
Another reference is Jim Baggott’s book, “Mass”. The relevant chapter is “12, Mass Bare and Dressed” but the whole book reads well and contains a lot of fine detail and a proper chronological arrangement that you won’t be seeing here (this excellent book targets a general audience with rudimentary/ high school understanding of physics - ie people like me).


Feynman diagram of electron repulsion through the transfer of a virtual photon.


So - with the aid of Feynman diagrams - I’ll get on with the fluffy explanation. The problem with the equations - apart from the fact that they were too difficult to solve for anything outside the simplest of controlled interactions - is that they didn’t take into account any other “possible” way an interaction could occur. Equations just take what we now understand to be the most straightforward solution. As it turns out, all possible interactions that can happen, do happen - just that some are more likely than others. Richard Feynman developed a remarkably effective way of capturing the essence of a QED equation through the use of a space-time diagram called a Feynman diagram. As it turns out, every additional pair of vertices in a diagram (each vertex representing a point at which an interaction can occur) decreases the probability of that diagram’s contribution by a factor of roughly 100. This allows a way of determining which interactions have more relevance, although all of them (of which there is an infinite number) matter. Another critical feature of a Feynman diagram is that the overall interaction described by a set of Feynman diagrams is defined by the particles going in and the particles going out. It is only these ingoing and outgoing particles that we observe and measure (such “on-shell” particles necessarily obey Einstein’s mass-energy equivalence - if you will, they are very real to us). 
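
That “roughly 100” is, at heart, the fine structure constant, which you can compute from a handful of familiar constants:

# The fine structure constant sets the strength of the electron-photon coupling,
# and hence how quickly more complicated Feynman diagrams fade in importance.
import math

e = 1.602176634e-19           # elementary charge, C
epsilon_0 = 8.8541878128e-12  # vacuum permittivity
hbar = 1.054571817e-34        # reduced Planck constant
c = 299_792_458.0

alpha = e**2 / (4 * math.pi * epsilon_0 * hbar * c)
print(alpha)        # ~0.0073
print(1 / alpha)    # ~137: each extra pair of vertices suppresses a diagram by about this much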












With these modifications QED developed into a remarkably powerful and precise tool. If you want a relatively easy but blunt measure for an interaction you take the most obvious Feynman diagram to explain it. If you want a more precise answer then you need to include Feynman diagrams with more vertices including the more weird and improbable ones (eg virtual photons going backwards in time, virtual photons splitting to become electron-positron pairs which then recombine to restore the virtual photon, etc etc). The beauty of QED and Feynman diagrams is how simple it looks.. and the profound impression it makes on the nature of reality. The application of QED accurately accounted for such things as the Lamb shift, the electron’s g-factor, and the hyperfine structure of hydrogen (recall that hydrogen’s fine structure only needs the Dirac equation). In 1965 Sin-Itiro Tomonaga, Julian Schwinger, and Richard Feynman received the Nobel prize for their work in developing QED.

Quantum field theory expanded from QED to Quantum Chromodynamics (QCD) which describes the strong force with the fermion (matter particle) being the quark and the boson (force particle) being the gluon. It can also be represented by Feynman diagrams in a manner similar to QED. QCD has not been quite the success of QED. And things have moved on yet again. 

So what exactly is a quantum “field”? A better way to ask this question is: how does quantum mechanics deal with the concept of a field? Classical mechanics considers a particle to have certain properties like position and momentum (mass x velocity) and these properties can then influence the space around that particle - ie to create a field (the classical EM field being an example). Quantum mechanics arose because of problems applying classical mechanics to observations that probed the workings of the subatomic realm. Properties like position and momentum became difficult to pin down when challenged by observations which suggested that the entities under investigation show both particle and wave behaviour. Instead of classical mechanics’ particle, quantum mechanics has a wave function. Instead of using properties which fix an entity in space and time, quantum mechanics uses a mathematical tool called an operator. An operator is a function that maps one space of physical states to another space of physical states. Each measurement has its own quantum operator.




As you might have guessed, you can’t simply count the number of particles in the Standard Model and relate a field to each of them. Fields interact with each other (ie you can’t simply think of QED as an “electron field” + an “electromagnetic field” as they are interdependent) and what you consider to be a field depends on how you look at it, and, more importantly, what you need to use it for. Quantum fields are mathematical constructs used to solve problems and make predictions.




In Quantum Field Theory the fundamental fabric of the universe is the quantum field. The entity that we observe and measure is a vibration of that quantum field. Because of Heisenberg’s uncertainty principle (or rather, because of what the uncertainty principle tells us about the fundamental nature of reality) what seems impossible and in violation of our fundamental understanding of physics - eg particles and energy popping into existence out of nothing - is allowed, so long as it occurs within a time period that makes these virtual particles “immeasurable”. But that doesn’t mean their effect can’t be measured. In QFT a vacuum isn’t an absence of anything, it’s a throbbing, pulsing, vibrating nightclub of virtual particles. The overall energy content of a vacuum is zero but if you could take a conceptual snapshot of it you would see stuff that doesn’t appear to exist outside of a mathematical construct.
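
As a rough order-of-magnitude sketch (using the standard energy-time uncertainty relation), here’s how long a virtual electron-positron pair gets to exist:

# delta_E * delta_t ~ hbar/2: the more energy is "borrowed", the shorter the loan.
hbar = 1.0546e-34        # reduced Planck constant, J*s
m_e = 9.109e-31          # electron mass, kg
c = 299_792_458.0

borrowed_energy = 2 * m_e * c**2          # energy needed to conjure the pair
print(hbar / (2 * borrowed_energy))       # ~3e-22 seconds - gone before it can be caught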


Computer simulation of quantum vacuum fluctuations.


The Casimir effect is a measure of the effect of this so-called “vacuum energy”. Put two uncharged plates close together in a vacuum and look very, very, very closely and you will see that they get pushed together as a result of the vacuum energy on the outside of the plates being greater than that between the plates.
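
For the ideal case of two perfectly conducting parallel plates there is a standard formula, and the numbers are fun:

# Casimir pressure between ideal parallel plates: P = pi^2 * hbar * c / (240 * d^4).
import math

hbar = 1.0546e-34
c = 299_792_458.0

def casimir_pressure(d_metres):
    return math.pi**2 * hbar * c / (240 * d_metres**4)

print(casimir_pressure(1e-6))   # plates 1 micrometre apart: ~1.3e-3 Pa
print(casimir_pressure(1e-8))   # at 10 nanometres: ~1.3e5 Pa, roughly atmospheric pressure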





Quantum physics is weird. But that’s proper weird.

If all this sounds like crazy, funky science then welcome to the club. For, if nothing else, quantum mechanics calls into question our intuition. 

Therein lies a problem. The equations that brought about QED and evolved into QFT as a whole are so far removed from daily experience as to be meaningless. Not meaningless in that we are unable to expand on it (for there is much to do) nor is it meaningless as there are real-world applications (in chemistry, computing, cryptography but also more obviously in applications like MRI scanners and PET scanners). It’s meaningless because the concepts contained within are impenetrable for all but a select few. So what? It’s just maths. And we can develop it and we can use it so where’s the problem? Why worry about what the equations actually mean?

Because it matters. It might not matter to bees or trees, kangaroos or chimps, but it matters to us. I’m no physicist and I’m an even worse philosopher but humans seem peculiar in their preoccupation with what it means to be and to exist. We worry about mortality. And happiness. About what it means to live a good life. About intentions as opposed to actions. About the “essence” of things. About guiding principles and beliefs. We care about individuality, belonging, social norms, and the common good. Actually, come to think of it, chimps probably care about some of these things too.. Anyway.. As the world rapidly changes and continues to do so at an ever-increasing rate we find a growing need to anchor ourselves to something. Spirituality. Religion. Metrics. Status. Material goods.. Whatever. 




Before you think that this is just intellectual posturing for people with too much time on their hands, it pays to recognise what we are dealing with. Let me give an obvious example. Underwear has an underlying principle. It has a purpose. The guiding principle of underwear is to provide coverage and support for the body part that applies to it. Skimpy lingerie defies the guiding principle of underwear. That’s because the underlying principle of lingerie is sexiness, seduction and desire. Lingerie has nothing to do with the prosaic and the practical. It goes far deeper into the human psyche than that. Knowing the difference is important.

What about other consumer goods? Gambling and addiction? Communication, information, and social media? .. Let me stop there and just say this. Life is complex. And evolves over time. If you will, an individual’s time line is the ultimate expression of sequential, time-dependent outcomes resulting from non-commuting dynamical variables. Whatever that means (whether you believe in free will, determinism, or sit somewhere in between), humans often feel the need for a roadmap to reference once in a while. 




Ok, enough of that. 

We can now move on to Heisenberg’s uncertainty principle, the Copenhagen interpretation measurement problem, and Schrödinger’s cat.


Heisenberg’s uncertainty principle takes us a long way back from where we finished at QFT (and we are yet to talk about Planck’s Constant, which takes us all the way back to the beginning). Sometimes it’s easier to discuss a topic by taking our understanding of where we are now and using that as a template for understanding some of the details that troubled us in the past (and still continue to trouble us). Quantum mechanics seems to be one of those topics.

Heisenberg’s uncertainty principle describes the limit on how much we can know about two associated properties that don’t commute. It is an information limit. Werner Heisenberg came across the relationship in February of 1927. It all started in 1925 with Heisenberg looking for a different way of understanding the energy levels of electron orbits. Instead of accepting the Bohr model of quantised orbits, which had never been observed (and failed to explain the spectra of larger atoms and molecules), Heisenberg took Bohr's correspondence principle and manipulated the equations so they involved only quantities that were directly observable. With the help of Max Born and Pascual Jordan this developed into matrix-based quantum mechanics, published in 1926. Within the year Erwin Schrödinger developed his wave equation. Much to Heisenberg’s annoyance the physics community took a far greater liking to Schrödinger’s way of doing things. Nonetheless Heisenberg’s matrix mechanics and Schrödinger’s equation (wave mechanics) are both novel tools used to probe the non-commuting properties of the electron (an entity shown to have both particle behaviour and wave behaviour). These statistical tools provide a probability distribution of all possible outcomes for the measured values. Max Born then proposed that we interpret Schrödinger’s wave function as a “probability amplitude”, the square of whose magnitude gives the probability of finding the electron at a given location.








But it is the mathematics of matrix mechanics that reveals the uncertainty principle (well, at least it seems to have made the concept more apparent to Heisenberg). In the mathematics of matrices it is not always the case that a x b = b x a. Heisenberg’s insight was that pairs of variables that don’t commute - such as position and momentum, or energy and time - can be connected through an uncertainty relationship. 
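To see what “doesn’t commute” means in practice, here’s a toy illustration (these are not Heisenberg’s actual matrices, just two small ones picked for the example):

# Toy illustration of non-commuting matrices: a x b is not the same as b x a.
import numpy as np

a = np.array([[1, 1],
              [0, 1]])
b = np.array([[1, 0],
              [1, 1]])

print(a @ b)   # [[2 1], [1 1]]
print(b @ a)   # [[1 1], [1 2]]  -- a different result: a and b do not commute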








In a thought experiment Heisenberg imagined trying to measure the position of an electron by bouncing a photon off it. The higher the frequency of the photon, the greater the resolution and thereby the greater the accuracy in determining the electron’s position. But the higher the frequency, the greater the energy imparted by the photon to the electron, and therefore the less we can know about the electron’s momentum. Heisenberg’s take on the uncertainty principle was that the act of observation changes the event being observed.
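To get a feel for the numbers, here’s a rough sketch of the trade-off for an electron pinned down to roughly the size of an atom. The figures are illustrative only, taking the textbook relation delta_x * delta_p >= hbar/2:

# Back-of-the-envelope estimate of the position-momentum trade-off for an electron.
hbar = 1.054571817e-34   # reduced Planck's constant, J.s
m_e = 9.1093837e-31      # electron mass, kg

delta_x = 1e-10                      # position known to ~0.1 nm (about an atom's width)
delta_p = hbar / (2 * delta_x)       # minimum momentum uncertainty, kg.m/s
delta_v = delta_p / m_e              # corresponding velocity uncertainty, m/s

print(delta_p)   # ~5e-25 kg.m/s
print(delta_v)   # ~6e5 m/s -- hundreds of kilometres per second

Pin the electron down to a tenth of a nanometre and its speed becomes uncertain by hundreds of kilometres per second.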

This was not Niels Bohr’s take on it. Bohr felt that Heisenberg’s formulation was correct, but the interpretation less so. Bohr felt that uncertainty was a feature of the quantum world itself rather than an artefact of the measurement. Heisenberg sent his paper to Bohr prior to publication and it appears that it was Bohr’s reading of it that developed Bohr’s principle of complementarity. In any case the work was published and Bohr was able to convince Heisenberg of the fundamentally probabilistic nature of quantum measurement. In October 1927, at the Fifth Solvay International Conference on Electrons and Photons, Werner Heisenberg and Max Born concluded that the quantum revolution was complete and nothing further was needed. This led to the formalisation of two very different viewpoints and, subsequently, to the famous public debates between Niels Bohr and Albert Einstein (the Bohr-Einstein debates), culminating in the EPR paradox eight years later.





So what exactly is a measurement? This is an immensely difficult question to answer. Questions like this have had philosophers, physicists, information theorists and many others discussing and arguing for the best part of a century. It suffices to say that you won’t find the answer here. But, for our purposes, we can go back to the double slit experiment and the measurement problem.

Recall that the double slit experiment is where photons or electrons (subsequently atoms and small molecules) are streamed through two slits. The interference pattern on the detector screen beyond the two slits is evidence for the wave nature of these entities. The interesting thing is that even if we send each photon or electron one at a time, and record where they strike one at a time, the interference pattern still - eventually - emerges. So what’s going on here? The only way you can get an interference pattern with a sequence of individual entities is for each entity to interfere with itself. That is to say that each entity is conceived as a wave that goes through both slits. This is known as a superposition*. The measurement problem arises when we actually make a measurement to determine which slit the entity passes through (ie we resolve the entity passing through one of the two slits). Once we do that - ie once we interfere with the coherence of the quantum system - then we observe only two strike areas on the detector screen corresponding to the two slits. This loss of coherent quantum behaviour is called quantum decoherence. 

*Quantum superposition is the conundrum of Schrödinger’s cat.
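Before moving on, a minimal toy model shows why adding amplitudes (and then squaring) gives fringes, while adding probabilities doesn’t. This is just an illustrative far-field sketch with made-up slit dimensions, not a simulation of any real experiment:

# Toy two-slit model: equal amplitudes from each slit, far-field approximation.
import numpy as np

wavelength = 500e-9        # metres
d = 10e-6                  # slit separation, metres
k = 2 * np.pi / wavelength

theta = np.linspace(-0.05, 0.05, 9)      # angles across the detector screen
phase = k * d * np.sin(theta)            # phase difference between the two paths

psi1 = np.exp(1j * 0)                    # amplitude via slit 1
psi2 = np.exp(1j * phase)                # amplitude via slit 2

interference = np.abs(psi1 + psi2)**2    # add amplitudes, then square: fringes
no_interference = np.abs(psi1)**2 + np.abs(psi2)**2   # add probabilities: flat

print(np.round(interference, 2))         # oscillates between ~0 and 4 (the fringes)
print(no_interference)                   # a flat 2.0 everywhere (no fringes)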

The measurement problem is the manifestation of Heisenberg’s uncertainty principle, with the complementary variables (= conjugate variables) under scrutiny being position and momentum. That is to say: we can’t precisely measure both the position (which of the two slits has been traversed) and the momentum (the interference pattern on the detector screen) of the photon or electron.




The question and response in the link above are enlightening enough to be cut and pasted below. Although it should be recognised that this is just the opening salvo..






A discussion on measurement is where the spoken word and the language of mathematics [appear to] conveniently converge. A description of any entity or event forces a solution. That is to say a description resolves entities and events into terms which we should be able to relate to. A description cannot convey every aspect of what it is supposed to describe, but it means we can convey and discuss such things even if we agree, disagree, stay neutral or remain uninvolved. Maybe a measurement is nothing more than a mathematical description? That might not be true, but say we run with that [I’m not going into the whole epistemic/ ontic argument which gets confusing rather quickly - possibly because it often leads to an argument on semantics rather than substance. To proceed I have to pick a side and this one makes sense to me. Moving on..]. A measurement then forces a solution for entities and events, with the structure of mathematics providing a singular perspective that everyone should agree upon (at least for the quality that has been measured). Measurements in the quantum realm resolve entities and events that occur in that space into terms that make observational sense (ie it forces a solution, which we understand as quantum decoherence). A photon or electron has measurable qualities that are resolvable (eg a position or momentum eigenstate) but forcing that solution leads to the “collapse of the wave function”.


So, in this interpretation, the measurement problem can be considered in terms of the description/ forced solution of quantum wave functions. In a tangible world that breathes and gossips a description doesn’t necessarily change the status of the entity or event being described. In the world of quantum mechanics the description or “fixing” of an eigenstate decoheres the system under investigation. But, crucially, the combined wave function of the system (under investigation) and the environment (that did the investigation) continues to obey Schrödinger’s wave equation. Note then that the measurement problem described above is simply an interesting observational phenomenon as quantum coherence for the system “steps down”. With measurement at the double slits decoherence occurs at the level of the slits with fixation of position eigenstates. Without measurement at the slits decoherence occurs at the detector screen with the observation of the interference pattern (ie measurement of the momentum eigenstates - or, in other words, the interference pattern of the [non-indexable] probability distribution of momentum). 

Heisenberg’s uncertainty principle for conjugate variables also includes the pairing of energy and time. The uncertainty relationship between these two quantities “allows” the existence of virtual particles (in the form of energy fluctuations) so long as they exist for a duration (a minuscule time period) that complies with the limits of the equation. The observation of this “vacuum energy” is the Casimir effect described above.
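As a rough illustration of that trade-off (taking delta_E * delta_t ~ hbar/2 as an order-of-magnitude guide, nothing more), here is how briefly a fluctuation could “borrow” enough energy to look like an electron-positron pair:

# Order-of-magnitude estimate: how long can a fluctuation "borrow" the energy
# of an electron-positron pair? (delta_E * delta_t ~ hbar/2 assumed as a guide.)
hbar = 1.054571817e-34      # reduced Planck's constant, J.s
m_e = 9.1093837e-31         # electron mass, kg
c = 299792458.0             # speed of light, m/s

delta_E = 2 * m_e * c**2            # rest energy of an electron-positron pair, ~1.6e-13 J
delta_t = hbar / (2 * delta_E)      # maximum "lifetime" of the fluctuation

print(delta_t)   # ~3e-22 seconds -- far too brief to be measured directly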

We can now move on to the EPR paradox, entanglement, and quantum non-locality.

The Einstein-Podolsky-Rosen paradox is a thought experiment to challenge Bohr and those that shared a similar position on the interpretation of quantum mechanics. It was published in the May 15, 1935 issue of Physical Review and provocatively titled “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?” This prompted Bohr to respond in the same year, in the same journal, using the same title. Although the papers reference the structure of the maths at the core of quantum mechanics (wave equations with probabilistic outcomes and the uncertainty relationship of complementary variables) the heartfelt question being probed by Einstein was the underlying nature of reality. The position of Einstein and co. is that reality is unitary and causal, and thereby predictable, with observers having or taking specific vantage points. The position of Bohr and co. is that reality has a foundation in probabilities, with uncertainty relationships that “collapse” under observation (formalised as the Copenhagen Interpretation). There are problems with both these positions. Current research in quantum information science (ie quantum computing/ cryptography/ information theory/ teleportation and entanglement) and in nascent fields like quantum biology and nanoscale thermodynamics may - or may not - provide further clarification.





As with anything, one must be careful that, in trying to understand something complex with limited resources, one doesn’t merely step up from a lower level of obfuscation to a higher level of obfuscation. With that said, this is where we are going.

At the centre of the EPR paradox is what we now recognise as quantum entanglement. At the time this was a theoretical problem presented by the maths that didn’t have an easy explanation (and still doesn’t). Subsequent experiments have realised such entangled relationships, which vindicates the maths but doesn’t explain it. Entanglement is a quantum effect where the superposition of two “entangled” entities resolves when the state of one entity is pinned down. The entangled entities have to be coherent to maintain their state of superposition, with decoherence occurring when one of the two entities is resolved. One example of entangled entities is the creation of an electron-positron pair from a photon (the photon’s decomposition being the opposite of electron-positron annihilation with the release of gamma radiation). The wave function that describes the electron-positron pair is preserved until one or the other is measured. There are other theoretical and experimental ways to create entangled entities and even ways to send such entities far away from each other without violating their quantum coherence (thereby preserving their combined wave function). Decoherence of the system occurs when one of the entities is measured. If that measurement takes place at a site far away from where the other entity has been sent, the decoherence phenomenon is taken to be evidence for quantum “non-locality” or what Einstein called “spooky action at a distance”.




This was a problem that troubled not only Einstein but a large section of the physics community as well. If information has a speed limit (ie the speed of light) then the instantaneous collapse of the wave function for entangled entities at distant sites means either special relativity is flawed or quantum space is nonlocal. Nothing is considered sacred in science but there are some things humans like to accept as unshakeable truths so they can get on with living their lives. Regardless of creed and culture most adults would treat absolute space, time, and locality as canon. It is, after all, the most reproducible part of everyday life. In the early 20th century physicists understood that the maths of special relativity could also reproducibly explain and predict outcomes seen in the real world while dispensing with absolute space and time. Locality (position within a given space-time reference), however, remained unchallenged. Bringing together the maths that brought about Schrödinger’s wave equation and Heisenberg’s uncertainty principle, and extrapolating beyond, suggested seemingly impossible outcomes, culminating in the Bohr-Einstein debates and, ultimately, the EPR paradox. Solutions had to be found. The spectre of Einstein’s “hidden variables” confounding the maths has failed the litmus test of Bell’s inequalities in experiment after experiment but, for some holdouts, it is yet to be put to bed. That observables and measurement are key to pinning down reality also has many proponents. Nonetheless the smarts who spend a lot of time on these things have moved away from the original positions of Einstein and Bohr.. to interpretations that border on psychedelic..


Chill man.


So, how does this play out for a human being with a brain large enough and complex enough to contemplate the meaning and nature of a conscious state? I’d suggest - not very well. For a start, locality and a directional timeline are axiomatic to Life itself.

This is where Planck’s Constant comes in. 

I’ll try to be brief.. 

Electromagnetic radiation generated by the jiggling around of the particles that make up matter is called thermal radiation. All matter at temperatures above absolute zero has thermal motion of its constituent particles and thereby emits thermal radiation. As a body gets hotter the peak of its emission spectrum shifts to higher frequencies (shorter wavelengths) and it glows accordingly. A human has a body temperature of 37°C (~310 Kelvin), which means we emit mostly low frequency infrared photons (which is why night vision goggles are tuned to the infrared spectrum). The surface of the sun is ~5800 K and emits light mostly in the yellow-green part of the EM spectrum. Rigel A, the largest of the three stars that create the brightest point in the constellation of Orion, has a surface temperature of ~12000 K and emits photons mostly in the ultraviolet range.
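Those three temperatures can be sanity-checked with Wien’s displacement law, which says the peak wavelength is roughly b/T (b being Wien’s constant). A quick sketch:

# Peak emission wavelength from Wien's displacement law: lambda_peak = b / T.
b = 2.898e-3   # Wien's displacement constant, m.K

for name, T in [("human body", 310), ("sun's surface", 5800), ("Rigel A", 12000)]:
    peak_nm = b / T * 1e9
    print(name, round(peak_nm), "nm")

# human body    ~9350 nm  (infrared)
# sun's surface  ~500 nm  (green)
# Rigel A        ~242 nm  (ultraviolet)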

Thermal radiation represents the conversion of thermal energy into electromagnetic energy. A nearby object absorbs some of that energy and reflects the rest. An idealised, perfect absorber of incident radiation is called a blackbody. At thermodynamic equilibrium an object emits as much radiation as it absorbs. A blackbody at thermodynamic equilibrium therefore emits the equivalent of 100% of the incident radiation. This emission is called blackbody radiation.




            

In the final years of the 19th century Max Planck was commissioned to investigate possible ways of making a light bulb more efficient. Light bulbs, like all bodies, emit a range of EM frequencies and the problem that Planck had to tackle was how electric companies could maximise the production of light from their bulbs while minimising the production of heat. At the time Planck was a professor of theoretical physics at the Humboldt-Universität zu Berlin with a background in thermodynamics. Planck’s attention was drawn to blackbody radiation and he soon came up against the problem of explaining the blackbody radiation curve.

Classical physics treats EM radiation as a wave and back then there was no reason to think otherwise. Inside the cavity radiator (the experimentalist’s simulation of an idealised blackbody) at thermodynamic equilibrium, the EM wave either transfers its energy to the material in the wall of the cavity radiator or is itself the transfer of energy away from the wall of the cavity radiator. The blackbody radiation curve is the observation of the emitted radiation. At 4000 K the blackbody radiates mostly visible red light, at 5000 K mostly yellow, at 5500 K mostly green.



Graph of radiation intensity vs wavelength for temperature T.


Before Planck there were two mathematical approximations for the blackbody radiation curve: the Wien distribution law, which worked well at describing the short wavelength (high frequency) end of the emission spectrum, and the Rayleigh-Jeans law, which worked better for the long wavelength end of the spectrum. Planck initially worked on the Wien distribution law but could not make it match experimental observation. He then took the model used in the Rayleigh-Jeans law with the assumption that the radiation is in equilibrium within the walls of the cavity. To make it fit the observed radiation curve he had to rely on a novel interpretation of the second law of thermodynamics proposed by Ludwig Boltzmann, who had introduced statistical mechanics to explain entropy. Planck’s insight was to make the atomic oscillations within the walls of the cavity radiator contingent on a minimum quantum of energy. This was a mathematical fudge that Planck did not like (not least because he had to accept Boltzmann’s idea that entropy was a statistical result rather than a fundamental law of nature). But the fudge worked.
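To see the fudge at work, here’s a short sketch comparing the standard textbook forms of Planck’s law and the Rayleigh-Jeans law at 5000 K. At long wavelengths they agree; at short wavelengths Rayleigh-Jeans blows up (the so-called ultraviolet catastrophe) while Planck’s curve falls away:

# Compare Planck's law with the Rayleigh-Jeans law (spectral radiance per wavelength).
import math

h = 6.62607015e-34    # Planck's constant, J.s
c = 299792458.0       # speed of light, m/s
k = 1.380649e-23      # Boltzmann's constant, J/K
T = 5000.0            # temperature, K

def planck(lam):
    """Planck's law: 2hc^2/lam^5 * 1/(exp(hc/(lam.k.T)) - 1)."""
    return (2 * h * c**2 / lam**5) / (math.exp(h * c / (lam * k * T)) - 1)

def rayleigh_jeans(lam):
    """Rayleigh-Jeans law: 2ckT/lam^4."""
    return 2 * c * k * T / lam**4

for lam in [100e-9, 500e-9, 10e-6]:     # ultraviolet, visible, far infrared
    print(lam, planck(lam), rayleigh_jeans(lam))

# At 10 micrometres the two agree reasonably well; at 100 nm the Rayleigh-Jeans
# value is absurdly large while Planck's curve has already fallen away.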


The Planck postulate is E = nhν (where n is an integer).




Graph of radiation intensity vs frequency for temperature T = 5800K.


Planck’s radiation law was formulated in 1900 and quantisation was conceived as a property of the harmonic oscillators (ie the vibrating atoms) within the walls of the cavity radiator. It was a mathematical quirk. The person who showed, in 1905, that the packets of energy were the quantisation of light itself (what we now call photons) was none other than a guy named Albert Einstein.

Planck’s constant (h) or the reduced Planck’s constant (ħ = h/2π) turns up in just about every equation in quantum mechanics.

It is a vitally important but reassuringly small number.
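For a sense of scale (rough numbers): h ≈ 6.626 × 10⁻³⁴ J·s, so a single photon of green light (frequency ~6 × 10¹⁴ Hz) carries E = hν ≈ 4 × 10⁻¹⁹ joules. A humble light bulb throws out something like 10¹⁹ of those packets every second.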