Quantum entanglement

A non-physicist friend expressed deep puzzlement about measurements at opposite ends of the Universe being somehow linked. Here I describe what I take to be the current state of theoretical and experimental knowledge about the “spooky action at a distance” that bothered Einstein and many other people.

The aspect of quantum mechanics that is pretty widely known and accepted is that small objects (atoms, electrons, nuclei, molecules) have “quantized” properties and that when you go to measure one of these quantized properties you can get various results with various probabilities. For example, an electron can have spin “up” or “down” (counterclockwise or clockwise rotation as seen from above; the Earth as seen from above the North Pole rotates counterclockwise, and we say its spin is up if we take North as “up”). The Earth, being a large classical object, can have any amount of spin (the rate of rotation, currently one rotation per 24 hours). The electron on the other hand always has the same amount of spin, which can be either up or down.

Pass an electron into an apparatus that can measure its spin, and you always find the same amount of spin, and for electrons not specially prepared you find the spin to be “up” 50% of the time and “down” 50% of the time. (It is possible to build a source of “polarized” electrons which, when passed into the apparatus, always measure “up”, but the typical situation is that you have unpolarized electrons, with 50/50 up/down measures.) It is a fundamental discovery that with a beam of unpolarized electrons it is literally impossible – not just hard, but impossible – to predict whether the spin of any particular electron when measured will be up or down. All you can say is that there is a 50% probability of its spin being up. It’s also possible to prepare a beam of partially polarized electrons, where for example you know that there is a 70% probability of measuring an electron’s spin to be up, but that’s all you know and all you can know.

So much for a review of the probabilistic nature of measuring a quantized property such as spin for a single tiny object. Next for the aspect of quantum mechanics that is less widely appreciated, which has to do with measures on one of a group of tiny objects. A simple case is two electrons that are in a “two-particle state”, where one can speak of a quantized property of the combined two-particle system. For example, in principle it would be possible to prepare a two-electron state with total spin (“angular momentum”) zero, meaning that electron #1 could be up, and electron #2 would be down, or vice versa. As a matter of fact, it was only in the last few decades that experimental physicists learned how to prepare such multiparticle states and make measurements on them, and it is these experiments, together with superb theoretical analyses, that have clarified the issues that worried Einstein. (Actually, most experiments have involved photons rather than electrons, but I’ve chosen the two-electron system as being more concrete in being able to make analogies to the spinning Earth.)

Suppose Carl prepares a zero-spin electron pair and gives one electron to Alice and the other to Bob. (Alice and Bob are the names conventionally used in the scientific literature to help the reader keep the two observers straight.) Alice and Bob keep their electrons in special carrying cases carefully designed not to alter the state of the electron inside. They get in two warp-speed spaceships and travel to opposite ends of our galaxy or, if one prefers, to opposite ends of the Universe (if the Universe has ends). Many years later, Alice measures the state of her electron and finds that its spin is up (there’s an arrow pointing up on her carrying case indicating what will be called “up”, and a similar arrow pointing up on Bob’s carrying case). If Bob then measures his electron, he will definitely find its spin to be down.

One might reasonably interpret these observations something like this: Carl happened to give Alice an “up” electron and (necessarily) gave Bob a “down” electron. There was a 50/50 chance of giving Alice an up electron, and this time Carl happened to give her an up electron. Then of course no matter how long Alice waits before measuring her electron, she’s going to find that it is “up”, and no matter how long he waits Bob is going to find that his electron is “down”. Yes, there are probabilities involved, because neither Carl nor Alice knows the spin of the electron until Alice makes her measurement, but the electron obviously “had” an up spin all the time.
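That common-sense picture can even be simulated. Here is a small Python sketch (my illustration, not part of the original post) of the “the electron had a definite spin all along” model: each pair leaves Carl with a definite, opposite assignment, and measurement merely reveals it.

    import random

    # "Hidden instruction" model: each electron pair is created with definite,
    # opposite spins, and measurement merely reveals the pre-set value.
    trials = 100000
    alice_up = 0
    opposite = 0
    for _ in range(trials):
        alice = random.choice(['up', 'down'])    # set at the source
        bob = 'down' if alice == 'up' else 'up'  # necessarily the opposite
        alice_up += (alice == 'up')
        opposite += (alice != bob)

    print(alice_up / trials)  # ~0.5: Alice alone sees 50/50, so no message is carried
    print(opposite / trials)  # 1.0: the two results are always opposite

This model reproduces both the perfect anticorrelation and the 50/50 single-side statistics, which is exactly why experiments testing Bell’s inequalities (discussed below) were needed: they probe correlations that such a model cannot reproduce.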

The amazing truth about the Universe is that this reasonable, common-sense view has been shown to be false! The world doesn’t actually work this way!

Thanks to major theoretical and experimental work over the last few decades, we know for certain that until Alice makes her measurement, her electron remains in a special quantum-mechanical state which is referred to as a “superposition of states” – that her electron is simultaneously in a state of spin up AND a state of spin down. This idea is very hard to accept. Einstein never did accept it. In a famous 1935 paper, he and two colleagues (Podolsky and Rosen) proposed experiments of this kind and, because quantum mechanics predicts that the state of Alice’s electron will remain in a suspended animation of superposed states, concluded that quantum mechanics must be wrong or at least incomplete. It took several decades of hard work before experimental physicists were able to carry out ingenious experiments of this kind and to prove conclusively that, despite the implausibility of the predictions of quantum mechanics, quantum mechanics correctly describes the way the world works.

I find it both ironic and funny that Einstein’s qualms led him to propose experiments for which he quite reasonably expected quantum mechanics to be shown to be wrong or incomplete, only for it to turn out that these experiments show that the “unreasonable” description of nature provided by quantum mechanics is in fact correct. These aspects of quantum entanglement aren’t mere scientific curiosities. They lie at the heart of work being done to implement quantum computing and quantum encryption.

What about relativity, and that nothing can travel faster than light? Not a problem, actually. The key point is that Alice cannot send any useful information to Bob. She cannot control whether her measurement of her electron will be up or down. Once she makes her “up” measurement, she knows that Bob will get a “down” measurement, but so what? And all Bob knows when he makes his down measurement is that Alice will make an up measurement. To send a message, Alice would have to choose to make her electron be up or down, as a signal to Bob, but the act of forcing her electron into an up or down state destroys the two-electron “entangled” state.

I recommend a delightful popular science book on this, from which I learned a lot: “Dance of the Photons” by Anton Zeilinger. Zeilinger heads a powerful experimental quantum mechanics group in Vienna that has made stunning advances in our understanding of the nature of reality in the context of quantum mechanics. In this book he makes the ideas come alive. The book includes detailed discussions of Bell’s inequalities and much else (Bell was a theoretical physicist whose analyses stimulated experimentalists to design and carry out the key experiments of recent decades).

It seems highly likely that Zeilinger will get the Nobel Prize for the work he and his group have done. A charming feature of the book is that Zeilinger is very generous in giving credit to many others working in this fascinating field. Incidentally, there is some movement in the physics community to bring contemporary quantum mechanics into the physics major’s curriculum, which in the past has been dominated by stuff from the 1920s.

Bruce Sherwood


The Feynman Lectures as textbook

As a young professor at Caltech I was assigned to teach intro physics (1966-1969), and I had the great good fortune to teach intro physics using the “Feynman Lectures on Physics” as the textbook. This experience had a huge impact on me. One of the effects was that I left experimental particle physics to work on university-level physics education, first in the PLATO computer-based education project at UIUC, and later at Carnegie Mellon and NCSU. Ruth Chabay used the Feynman book in an undergraduate course at the University of Chicago, and it was a major influence on us in the writing of the “Matter & Interactions” textbook.

Several times at public gatherings of physicists I have heard the claim that the Feynman course at Caltech was a failure, and I have always seized the opportunity to rebut these claims from my own experience. One of the things I’ve pointed out is that at the time he gave the original lectures there was no textbook, nor were there problems keyed to the lectures, whereas by the time I lectured in the course there was a lot of infrastructure, including the book. I also point out that in a traditional intro course students don’t understand everything, and ask the audience whether it is better to understand part of a traditional textbook or part of Feynman. In my judgment the course was a success at Caltech in the late 1960s, not a failure. When I moved to UIUC in 1969, I judged that it would have worked in an honors course there.

Matthew Sands with Robert Leighton translated Feynman’s unique spoken word into print. In his memoir “Capturing the Wisdom of Feynman” (Physics Today, April 2005, page 49), Sands provided confirmation of my own viewpoint. Feynman’s own assessment in the preface that the course was a failure has helped perpetuate the notion that it didn’t work, and I was glad to learn from Sands’ memoir that this was an off-the-cuff remark, not a carefully considered judgment. Moreover, Feynman’s view was not shared by others who were involved in teaching the course. As he says in the preface, “My own point of view — which, however, does not seem to be shared by most of the people who worked with the students — is pessimistic”.

Kip Thorne has written some commentary on the history of the Lectures:

http://www.basicfeynman.com/introduction.html

Lawrence Krauss’s excellent scientific biography “Quantum Man: Richard Feynman’s Life in Science” also discusses the Feynman course, and what he says is consistent with the views of Sands and me.

Bruce Sherwood


The Higgs boson and prediction in science

An aspect of the discovery of the Higgs boson to celebrate is the possibility of prediction in science — in this case, the prediction that a certain particle should exist so that the world can behave as it does, and even a prediction of its approximate mass, which made it possible to design an accelerator (the Large Hadron Collider) that could accelerate protons to a high enough energy to be able, in proton-proton collisions, to produce the predicted particle if it has the predicted mass. The accelerator was built, the experimentalists looked, and they found something at the right mass. They will study the reactions that particle has, and the ways it decays (falls apart into other particles), to try to pin down whether it in fact has the right properties besides the right mass to be the Higgs boson.

There are some other examples of predicting and finding previously unsuspected particles.

In the 1860s, building on preliminary, partial work by others, Mendeleev was able to bring order to all the known elements in his famous periodic table. Moreover, he correctly interpreted holes in his table as representing elements that had yet to be discovered. For example, he not only predicted the existence of germanium but also predicted its approximate atomic weight and chemical properties, and he was right. In all, he correctly predicted 8 elements that were unknown at the time. The Wikipedia article shows his debt to the ancient Sanskrit grammarian Panini, who had recognized similar kinds of order among the sounds of human speech.

At the time, no one, including Mendeleev, had any way to explain the ordering of the elements made manifest in the periodic table. It was 40 years later that Rutherford and his coworkers discovered that atoms consist of a tiny, extremely dense, positively charged core (the “nucleus”) surrounded by negatively charged electrons (which had recently been discovered by Thomson). A few years later experiments showed that the order of elements in the periodic table simply reflects the number of electrons in the atom (1 for hydrogen, 2 for helium, etc.).

In 1928 Dirac created the famous “Dirac equation”, constituting a version of quantum mechanics that is consistent with special relativity (the earlier Schrödinger equation is not consistent with relativity, though it remains useful in the nonrelativistic limit). An odd feature of the Dirac equation was its prediction of electron-like particles with negative energy, which led Dirac, with some reluctance, to predict the existence of an “anti-electron”, an electron-like object with positive charge. The positron was soon found by Carl Anderson at Caltech, with the predicted properties.

In the 1920s there was a puzzle in “beta decay”, in which a nucleus emits an electron (and metamorphoses into a nucleus with one additional positive charge; see my post on neutron decay). The puzzle was that the energies of the parent and daughter nuclei were known (from their masses) to be fixed quantities, but the electron was observed to have a broad range of energies, not simply the difference of the two nuclear energies. This was an apparent violation of the well-established principle of energy conservation. There were suggestions that perhaps energy is not conserved in nuclear interactions, but Pauli could not accept that. In 1930 he proposed that the electron is not the only particle emitted in beta decay: another particle is emitted as well, but goes unobserved. This implied that the unseen particle must have no electric charge, as otherwise it would be easily detected, and in fact it must also not interact with nuclei through the “strong interaction”, because again this would make it easily detectable. Also, the maximum observed energy of the electron was experimentally found to be about equal to the energy difference of the parent and daughter nuclei, which implied that the unseen particle must have very little mass. Pauli had predicted what is now called the neutrino, with specific properties: no electric charge, very small mass, no strong interactions.

The neutrino was observed directly only much later, in 1956, when Reines and Cowan placed detectors behind thick shielding next to a nuclear reactor at the Savannah River Plant in South Carolina. Neutrino reactions are very rare, but the flux of neutrinos was so large that occasionally the experimenters observed “weak” interactions of the neutrinos with matter. The properties of the neutrino matched Pauli’s predictions.

In the early 1960s Gell-Mann and Ne’eman independently were able to classify the large zoo of “elementary” particles into groups of octets and decuplets. There was a decuplet (of 10 particles) arranged in a triangle, like bowling pins, in which the particle at the point was unknown. Gell-Mann was able not only to predict the existence of this particle, which he called the Omega-minus, but he also predicted its charge and mass. A hunt for the Omega-minus was successful, and it had the predicted properties.

As was the case with Mendeleev’s periodic table, at first there was no explanation for the “why” of octet and decuplet groupings of the known particles. Soon however Gell-Mann and Zweig independently proposed that each “baryon” (heavy) particle was made of 3 “quarks” with unusual fractional electric charges, and each “meson” was made of a quark and antiquark. At first somewhat controversial, intense experimental work and closely related theoretical work by Feynman made it clear that the quark model does indeed explain the “periodic table of the particles”.

The creation of antiprotons occurred in a context very similar to that of the Higgs boson. The Berkeley Bevatron was a particle accelerator built in 1954, designed to accelerate protons to an energy sufficient to produce antiprotons if, as everyone predicted, the antiproton were to have the same mass as a proton (but negative charge). This design criterion was similar to the design consideration for the Large Hadron Collider, that of accelerating protons to an energy large enough to create Higgs bosons. Because by 1954 many particles were known to have antiparticle partners, it was not a surprise when antiprotons were indeed produced by the Bevatron.
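As a back-of-the-envelope check on that design criterion (my addition, not in the original post): the cheapest way to make an antiproton with a proton beam striking protons at rest is p + p \Rightarrow p + p + p + \overline{p}, and relativistic energy-momentum conservation requires the beam proton’s kinetic energy to be at least six proton rest energies,

T_{\text{beam}} \ge 6\,m_p c^2 \approx 6 \times 0.94\ \text{GeV} \approx 5.6\ \text{GeV}

consistent with the Bevatron’s design energy of about 6 GeV.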

I’ve listed some major predictions that were successful. However, it seems to me that “postdiction” is more common. For example, no one predicted that the rings of Saturn can be braided. When spacecraft first returned closeups of the rings, scientists were startled to see braided rings. A lot of work went into understanding these unusual structures (the key turned out to be the role of small “shepherd” moons).

Bruce Sherwood


Neutron decay

Neutrinos undergo only weak interactions, which are associated with slow decays. For example, a neutron (electric charge 0) outside a nucleus (a “free” neutron) decays due to the weak interaction into a proton (electric charge +1), an electron (electric charge -1), and an (anti)neutrino (electric charge 0) with a mean life of about 15 minutes:

n \Rightarrow p^+ + e^- + \overline{\nu}

At the other extreme, decays associated with the strong/nuclear interaction typically have a lifetime of about the time required for light to move the distance of a diameter of a proton, about (3 × 10⁻¹⁵ m) / (3 × 10⁸ m/s) = 1 × 10⁻²³ seconds. Any decay that takes a lot longer than 1 × 10⁻²³ seconds is typically an indication that it is associated with the weak interaction (though some “electromagnetic” decays may also have long mean lives).

For example, the positively charged pion decays into a low-energy positive muon (a heavy positron — a heavy anti-electron) and a low-energy neutrino with a mean life of about 25 nanoseconds (25 × 10⁻⁹ seconds), a time vastly longer than 1 × 10⁻²³ seconds, and this is a weak decay. (The energies are low because the pion mass is only slightly larger than the muon mass.)

\pi^+ \Rightarrow \mu^+ + \nu

It is pion decay that is the major source of neutrinos made in accelerators. The pions are made at high energy and move at high speed, with the result that the neutrinos emitted in the direction of motion of the pion get thrown forward with high energy. This is the mechanism for producing copious beams of high-energy neutrinos. There are low-energy neutrinos produced by nuclear reactors and by fusion reactions in our Sun.

One can say that there are three classes of fundamental particles: (1) particles made of quarks, called hadrons, including protons, neutrons, and pions, (2) particles that are not made of quarks, called leptons, including electrons, muons, and neutrinos, and (3) “glue” particles that mediate interactions among particles (for example, the photon mediates electromagnetic interactions). There exist purely leptonic interactions and decays, such as muon decay into electron, neutrino, and antineutrino, with a mean life of about 2 microseconds (2 × 10⁻⁶ seconds, a long time in this scheme of things). There also exist semileptonic weak interactions such as neutron decay, in which the neutron and proton are hadrons but the electron and antineutrino are leptons. Similarly, in pion decay the pion is a hadron but the muon and neutrino are leptons. There is a beautiful picture that unifies these various kinds of interactions, having to do with the exchange of particles.

The modern quantum field theory view of electron-electron repulsion is that one electron emits a (“virtual”) photon, with a change of energy and momentum by the emitting electron, and this (“virtual”) photon is absorbed by the other electron, so the energy and momentum of this electron also change. The photon is called “virtual” because it is not directly observable and can have a relation between energy and momentum that is not possible for a real photon. Photon exchange is considered to be the fundamental basis for electromagnetic interactions.

Similarly, remarkably similarly, weak interactions such as muon or neutron decay can be modeled as the exchange of positive or negative W particles. In this view, the free neutron decays into a proton and a W− (charge is conserved: the neutron has no electric charge, the proton has +1 unit of electric charge, and the W− has -1 unit of electric charge). Next, the W− decays into an electron and an antineutrino (“lepton number” is conserved: the W− has lepton number zero, the electron has lepton number +1, and the antineutrino has lepton number -1).

n \Rightarrow p^+ + W^-

W^- \Rightarrow e^- + \overline{\nu}

An even more fundamental picture of neutron decay is that a quark with charge -1/3 in the neutron emits a W and changes into a quark with charge +2/3, a net change of +1, corresponding to the neutron changing into a proton, but with a change that’s actually associated with the change of just one of its quarks; the other two quarks are mere spectators in the process.

q^{-1/3} \Rightarrow q^{+2/3} + W^-

W^- \Rightarrow e^- + \overline{\nu}

Here is an animation of this picture of neutron decay.

A key concept is the interaction “vertex”: In electron-electron repulsion, one vertex is the point where one of the electrons emits a photon, and the electron and photon paths diverge. A second vertex is where the other electron absorbs the photon and changes its direction. “Feynman diagrams” are little pictures of these vertex interactions. In neutron decay one vertex is quark – quark – W−, and a second vertex is W− – electron – antineutrino.

Consider positive pion decay. The positively charged pion with charge +1 consists of a quark with charge +2/3 and an antiquark of charge +1/3. One vertex is quark – antiquark – W+, and the other vertex is W+ – muon – neutrino.

q^{+2/3} + \overline{q}^{+1/3} \Rightarrow W^+

W^+ \Rightarrow \mu^+ + \nu

Consider the purely leptonic decay of a negative muon. One vertex is muon – W− – neutrino, and the other is W− – electron – antineutrino.

\mu^- \Rightarrow W^- + \nu

W^- \Rightarrow e^- + \overline{\nu}
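Purely as bookkeeping, here is a small Python sketch (my addition, not from the post) that checks charge and lepton-number conservation for the reactions listed above. Particle properties are entered by hand, with quark charges in thirds; neutrino flavor (discussed at the end of this post) is not tracked.

    from fractions import Fraction as F

    # (electric charge in units of e, lepton number) for each particle used below
    props = {
        'n': (F(0), 0), 'p+': (F(1), 0),
        'e-': (F(-1), 1), 'mu-': (F(-1), 1), 'mu+': (F(1), -1),
        'nu': (F(0), 1), 'nubar': (F(0), -1),
        'W-': (F(-1), 0), 'W+': (F(1), 0),
        'q(-1/3)': (F(-1, 3), 0), 'q(+2/3)': (F(2, 3), 0), 'qbar(+1/3)': (F(1, 3), 0),
    }

    def conserved(before, after):
        """True if total charge and total lepton number match on both sides."""
        totals = lambda side: tuple(sum(props[p][i] for p in side) for i in (0, 1))
        return totals(before) == totals(after)

    reactions = [
        (['n'], ['p+', 'W-']),                    # neutron emits a W-
        (['W-'], ['e-', 'nubar']),                # W- decay
        (['q(-1/3)'], ['q(+2/3)', 'W-']),         # quark-level neutron decay
        (['q(+2/3)', 'qbar(+1/3)'], ['W+']),      # pi+ becomes a W+
        (['W+'], ['mu+', 'nu']),                  # W+ decay
        (['mu-'], ['W-', 'nu']),                  # muon emits a W-
    ]

    for before, after in reactions:
        print(before, '->', after, ':', conserved(before, after))  # all True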

Feynman diagrams consist of interaction vertices with exchanges of virtual particles such as the photon (electromagnetism) or the W (weak interactions). In fact, the first big unification was the recognition that electromagnetism (photon exchange) and weak interactions (W exchange) were basically the same thing, which is called the “electroweak” interaction.

Summary: Weak interactions involve interaction vertices that include the W+ or W−, and they are slow compared to strong/nuclear interactions. W’s “couple” to quarks, and they also “couple” to leptons, hence such “semileptonic” phenomena as neutron decay, where one vertex is quark – quark – W− and the other vertex is W− – electron – antineutrino.

I should mention that in addition to the photon, W+, and W−, there is also an electrically neutral Z particle that is exchanged in certain kinds of weak interactions where no change in electric charge occurs at a vertex. Also, when a W− decays into an electron and an antineutrino, the antineutrino is an electron-type antineutrino, whereas when a W− decays into a (negative) muon and an antineutrino, the antineutrino is a muon-type antineutrino. The neutrinos and antineutrinos associated with positrons and electrons are different from the neutrinos and antineutrinos associated with positive and negative muons.

Bruce Sherwood


Work and energy for an accelerating car

Someone wrote to me with questions about work vs. pseudowork (extended system vs. point-particle system). At one point he asked me to analyze a subsystem of an accelerating car, the car minus the wheels, and I learned something in the process that’s worth sharing. It’s yet another example of why, in calculating work, it’s critically important to pay close attention to the individual displacement of the point of application of each force, not simply use the displacement of the center of mass.

A car moves with constant acceleration a in the +x direction, with no tire slippage. Simple model: there’s a forward static friction force f applied by the road to the bottom of each tire, where the instantaneous speed is zero, and therefore this force does no work on the bottom of the wheel. Total mass of car is M, mass of each wheel is m, each wheel is like a bicycle wheel, with all mass at the rim of radius R, so the moment of inertia of the wheel is mR^2. From the Momentum Principle we have Ma = 4f. No work is done on the car, so \Delta E_{\text{car}} = 0, where \Delta E_{\text{car}} = \Delta K_{\text{trans}} + \Delta E_{\text{int}}, where K_{\text{trans}} = \frac{1}{2}Mv_{\text{cm}}^2 is the translational kinetic energy of the car and \Delta E_{\text{int}} includes kinetic energy of wheels, pistons, camshaft, chemical energy of gasoline, thermal energy of engine block, etc. For a point-particle system subjected to the same forces, \Delta K_{\text{trans}} = 4fd, where d is the distance the car moves, and this is also the change in translational kinetic energy of the actual car.

Now consider a system I’ll call “sys” consisting of the car minus the wheels, with mass M-4m. In this simple model, suppose the engine pushes on the top of each wheel. When I was a kid there were little electric motors you could mount on the top of the front wheel of a bike. The motor turned a small wheel in contact with the big wheel, to drive the big wheel. Or you could imagine an arm or arms continually pushing the top of the wheel, then being lifted and retracted. What are the energetics of “sys”?

Start by analyzing the wheel alone, which is acted upon by the force f of the road, the force of the car axle on the inside of the hub of the wheel, and the force of the engine along the top of the wheel. By determining these forces we get, by reciprocity (these contact forces are electric in origin), the forces the wheel exerts on “sys”. In the +x direction the engine exerts a force +f_2 and the axle exerts a force -f_3 (the engine force pushes the hub of the wheel against the car’s axle). Remember that Ma = 4f, so f = Ma/4.

Momentum Principle: ma = f + f_2 - f_3

Angular Momentum Principle (taking -z components): I\alpha = Rf_2 - Rf, where I\alpha = (mR^2)(a/R) = Rma, so ma = f_2 - f, and f_2 = ma + Ma/4 = (m+M/4)a

From the Momentum Principle we have ma = Ma/4 + (m+M/4)a - f_3, so f_3 = Ma/2, which is twice as large as f (which is a bit surprising).

Summary for wheel: The wheel is acted upon by the road, f = Ma/4 in the +x direction, the axle, f_3 = Ma/2 = 2f in the -x direction, and the engine, f_2 = (m+M/4)a in the +x direction. The engine force is only slightly larger than f, by the amount ma which is small compared to f = Ma/4, since the wheel mass m is very small compared to the large mass M of the car. It’s interesting that the force of the axle on the wheel is so large, twice as big as f. Of course if the mass of the wheel is distributed differently the values of f_2 and f_3 will be different, related to I not being simply mR^2.

Now we can look at “sys”, the system consisting of the car minus the wheels, with mass M-4m. The forces in the +x direction acting on “sys” are the forces due to the hubs of the wheels, +4f_3 = +4(Ma/2) = 2Ma, and the forces due to the tops of the wheels on the engine, -4f_2 = -4(m+M/4)a.

Momentum Principle: (M-4m)a = 4f_3 - 4f_2 = 2Ma - 4(m+M/4)a = Ma - 4ma = (M-4m)a, which checks.

What about the Energy Principle for “sys”? Here’s the element in the analysis that I found particularly interesting. The forces of the hubs of the wheels are applied to the axle, which in a small time dt moves through a small displacement vdt, where v is the instantaneous speed of the car. But the forces of the tops of the wheels on the engine act through a distance 2vdt! The instantaneous speed of the top of the wheel is 2v (the speed of the hub is v, the speed of the bottom of the wheel is zero).

Energy Principle: \Delta E_{\text {sys}} = W = 4f_3\times vdt - 4f_2\times 2vdt = 4(Ma/2 - 2(m+M/4)a)\times vdt = -8ma\times vdt

As usual, real work must be calculated by integrating EACH force through the displacement of ITS point of application, and THEN adding up all the contributions to get the total net work. Here, as in all cases of deformation or rotation, there is the possibility that different forces act through different distances. The case here is particularly striking, because the force at the top of the wheel acts through twice the distance of the force at the hub.

It may at first glance seem odd that the work done by the wheels on “sys” is negative. However, note that “sys” increases the wheels’ translational and rotational motions, thereby increasing the energy of the wheels, so there must be a small decrease in the energy of “sys” associated with giving energy to the wheels. There is of course a much larger decrease in chemical energy associated with accelerating the mass of the “sys” system.

Compare with car: \Delta E_{\text {car}} = \Delta E_{\text {sys}} + \Delta E_{\text {wheels}} = 0, so \Delta E_{\text {sys}} = -\Delta E_{\text {wheels}}, which is negative, with \Delta E_{\text {wheels}} = 8ma\times vdt

The energy of one wheel is E = K_{\text {trans}} + K_{\text {rot}} = \frac{1}{2}mv^2 + \frac{1}{2}(mR^2)(v/R)^2 = mv^2

The rate of energy change is dE/dt = 2mv\times dv/dt = 2mva, and for 4 wheels we have dE/dt = 8mva. In a time dt the amount of work done on the wheels is 8ma\times vdt, which is indeed what we found above.
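Here is a quick numerical check of this bookkeeping (my addition, with arbitrary illustrative values for M, m, a, and v): it verifies the momentum balance for “sys”, the negative work done on “sys”, and the match with the wheels’ energy gain.

    # Numeric check of the wheel/"sys" analysis above, with made-up numbers.
    M, m, R = 1000.0, 10.0, 0.3   # car mass (kg), wheel mass (kg), wheel radius (m): illustrative
    a, v = 2.0, 20.0              # acceleration (m/s^2) and instantaneous speed (m/s)
    dt = 1e-3                     # small time interval (s)

    f  = M * a / 4                # road on each tire
    f2 = (m + M / 4) * a          # engine on top of each wheel
    f3 = M * a / 2                # axle on each wheel (= 2f)

    # Momentum Principle for "sys" (car minus wheels): both sides should match
    print((M - 4 * m) * a, 4 * f3 - 4 * f2)

    # Energy Principle for "sys": hub forces act through v*dt, wheel-top forces through 2*v*dt
    W_sys = 4 * f3 * v * dt - 4 * f2 * (2 * v * dt)
    print(W_sys, -8 * m * a * v * dt)             # equal: the work on "sys" is negative

    # Energy gained by the four wheels in dt (E = m*v**2 per wheel, dE/dt = 2*m*v*a per wheel)
    print(8 * m * v * a * dt)                     # equals -W_sys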

Bruce Sherwood


GlowScript: 3D animations in a browser

You are invited to try out GlowScript (“Graphics Library on Web”), an easy-to-use 3D programming environment inspired by VPython but running in a browser window. GlowScript has been developed by David Scherer (the originator of VPython) and me.

Thanks to the use of the RapydScript Python-to-JavaScript compiler, you can write programs using the syntax and semantics of VPython. With some limitations, a VPython program can now run either in “classic” mode after installing Python and the VPython module, or in a browser. VPython can also be used in a Jupyter notebook, thanks to the vpython module created by John Coady and extended by Ruth Chabay and me, which uses the GlowScript libraries to display 3D animations in a browser-based notebook.

In addition to writing programs using the VPython definitions, you can write programs in JavaScript or RapydScript. Just change the name “VPython” in the header line of the program to one of these other languages.
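To give the flavor, here is a minimal GlowScript VPython program of the kind you’ll find among the example programs (the header line is created automatically by the editor; the version number 3.0 here is just illustrative):

    GlowScript 3.0 VPython
    # A spinning box; rate(60) limits the animation loop to 60 iterations per second.
    b = box(color=color.cyan)
    while True:
        rate(60)
        b.rotate(angle=0.01, axis=vec(0, 1, 0))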

* At glowscript.org, click Example programs to see the kinds of things that VPython can do in a browser. When viewing a list of programs you can click View to see the VPython program, or when running a program you can click Edit this program to see the code.

* Click Help in the upper right corner of the window for detailed documentation on GlowScript VPython (and a link to more technical documentation on using JavaScript or RapydScript, and on how to use the GlowScript text editor).

* GlowScript uses the WebGL 3D graphics library that is included in current versions of major web browsers. You must have a reasonably modern graphics card with a GPU (Graphics Processing Unit). GlowScript even works in browsers on smartphones and tablets.

* To write your own programs, log in (you’ll be asked for a Google login, such as a gmail account).

* Click “Run this program” or press Ctrl-1 to execute your program in the same window, then click “Edit this program” to return to editing.

* Alternatively, while editing press Ctrl-2 to execute your program in a separate window, so that you can view the execution and the program code simultaneously. After making edits, press Ctrl-2 in the editor to run the new program.

* While running a program, click Screenshot to capture a thumbnail image for your program page.

* In the editor, click Share this program to learn how to let other people run your program.

There is a version system in place that will allow old programs to continue running in the future. The first line of a program you write is automatically created to be “GlowScript X.Y VPython” (where X.Y is the current version number). When a new version comes out, the software for running the older version is retained for use whenever a program with an old version number is encountered.

There is now a user forum connected to glowscript.org, where you can describe your experiences or ask for assistance.

WebGL’s emphasis on the use of the Graphics Processing Unit (GPU) available on modern graphics cards makes it possible for GlowScript to do high-quality graphics.

For users of VPython, note that the VPython Help summarizes the main differences between Classic and GlowScript versions of VPython. Also available there is a Python program for converting VPython programs to GlowScript programs.

The GlowScript libraries have been implemented by Brian Marks in Trinkets and in the new Jupyter version of VPython mentioned above.

If you are new to programming, you may find the Python tutorials at www.codecademy.com very helpful.

Bruce Sherwood


The speed of light in a material

This is in response to the question, “So the speed of light differs depending on medium, right? Is this also true for neutrinos?” This question from a friend was prompted by the recent measurements at CERN suggesting that neutrinos might travel very slightly faster than light. (Note added September 2012: There turned out to be a problem with the experiment, and there is no evidence for neutrinos traveling faster than light.)

Actually, there is an important sense in which one can (and should) say that the speed of light does NOT depend on the medium! On my home page, see my article “Refraction and the speed of light”. If you accelerate charges, they radiate light. Light consists of traveling waves of electric and magnetic fields: see “What is Light? What are Radio Waves?”.

There is an extremely important though underrated property of charges and fields called the “superposition principle”: The value of the electric or magnetic field at a location in space is the vector sum of all the fields contributed by all the charges in the Universe, AND THE CONTRIBUTION OF ANY PARTICULAR CHARGE IS UNAFFECTED BY THE PRESENCE OF OTHER CHARGES.

It is the capitalized portion of the principle that, despite its innocent-sounding content, leads to quite counterintuitive consequences. For example, you’ve probably heard that a metal container shields out electric fields made by charges outside the container. False! There is no such thing as “shielding”. By the well-validated superposition principle, the field at any location inside the metal container includes the field contributed by external charges. However, it LOOKS as though the metal prevents the field from getting in, because the external charges “polarize” the metal by shifting the mobile electrons in the metal, and the polarized metal contributes an additional electric field inside the container that is equal in magnitude but opposite in direction to the field contributed by the external charges. The effect is indeed as though the metal “shielded” the interior, but the actual mechanism has nothing to do with “shielding”, and the field due to the external charges is most definitely present inside the container.

Consider a cubical box with metal walls, and there’s a positive charge to the right of the box. That positive charge makes an electric field through the region, and that field causes (negatively charged) mobile electrons in the metal to move to the right, toward the external positive charge. That makes the right side of the box have an excess negative charge, and it leaves the left side with a deficiency of electrons, hence a positive charge.

By convention, the direction of electric field is said to be in the direction that a positive charge would be pushed, so the electric field inside the box due to the external positive charge is to the left. Note that the “polarization” charges, negative on the right side of the box and positive on the left side of the box, contribute a field inside the box to the right. The 1/r² character of the electric field of point charges leads to the surprising result that the field inside the box contributed by the polarization charges is exactly equal in magnitude and opposite in direction to the field contributed by the external charge, so the vector sum of the field contributions of all the charges is in fact zero inside the box, as though the metal “shielded” the interior.

Back to the case of light, which is produced by accelerated charges. If you accelerate charges for a short time, they radiate a short pulse of light. Let’s accelerate some charges somewhere off to the left, for a short time. Light (electric and magnetic fields) propagates in all directions, but we’re interested in the light traveling to the right, toward a detector (which could be a camera) some known distance from the “source” (the accelerated charges). We measure the time from when we briefly accelerated the charges to when we detect the light a known distance away. Divide distance by time and get the speed of light in air, 3e8 m/s.

Now let’s repeat the experiment, except that there’s a thick slab of glass between the source and the detector. You’ve surely heard that “light travels much slower in glass than in air”, so you would expect the light to take significantly longer to reach the detector now that the glass is in place. But that’s not what happens! You find the same time interval between the emission and the first light reaching the detector, and you determine the same 3e8 m/s speed as before! And you must, because the field at any location in space is the vector sum of the field contributions of all the charges in the Universe, unaffected by the presence of other charges (in this case, the electrons and protons in the glass). The fields radiated by the accelerated charges are unaffected and reach the detector in the same amount of time as before.

However, there is an effect. As the electric field passes through the glass, it accelerates the electrons and protons (it accelerates the electrons much more than the protons, due to their very low mass). These accelerated electrons radiate electromagnetic radiation, like any accelerated charges. The traveling fields of this re-radiation also come to our detector, so that the shape of the pulse we receive is altered from what we saw without the glass, because there are now additional field contributions that were not present in the absence of the electron-containing glass. The first bit of light shows up on time, but then the situation becomes quite complicated.

An important special case is that where the source charges off to the left are accelerated not for a short time, but continuously, sinusoidally up and down (which involves accelerations as the charges move faster and slower and turn around). If you turn on this sinusoidal radiation abruptly, of course you’ll first see some light at the detector on time, with or without the glass being present. But let the sinusoidal acceleration of those source charges continue for a long long time. It can be shown that the vector sum of this radiation and the re-radiation from electrons accelerated in the glass leads to a detection of sinusoidal radiation, and that sinusoidal radiation has a phase which is shifted. That is, the peaks come at a different time than they did without the glass. In fact, in the “steady state”, the peaks come later than they used to, and the lateness is proportional to how thick the glass is. It is a useful shorthand to say that the “light travels more slowly in the glass”, as that description is consistent with the phase delay of peaks in the sinusoid, in the steady state, even though the speed of light in the glass is the usual 3e8 m/s. (The initial transient is messy, and not a simple sinusoid.)
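To see numerically how adding a small re-radiated wave delays the peaks, here is a little Python sketch (mine, in arbitrary units; the 90-degree lag and the small amplitude eps are the assumptions appropriate to a thin slab):

    import numpy as np

    omega = 1.0                                     # angular frequency (arbitrary units)
    t = np.linspace(0, 20, 200001)
    source = np.cos(omega * t)                      # field at the detector with no glass
    eps = 0.2                                       # small re-radiated amplitude (assumed)
    reradiated = eps * np.cos(omega * t - np.pi/2)  # re-radiation lags the source by 90 degrees
    total = source + reradiated                     # superposition at the detector

    # Locate the peak near t = 2*pi for each wave:
    window = (t > 4) & (t < 9)
    t_source = t[window][np.argmax(source[window])]
    t_total = t[window][np.argmax(total[window])]
    print(t_total - t_source)   # ~0.197 = arctan(eps): the summed wave peaks later

In the steady state the same thing happens layer after layer of the slab, and that accumulated lag of the peaks is what the refractive index summarizes.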

Richard Feynman in the famous Feynman Lectures on Physics discusses this quantitatively in Chapter I-31, “The Origin of the Refractive Index”. The “refractive index” is usually denoted by n, and it is common practice to say that the speed of light in a medium with refractive index n is (3e8 m/s)/n. But in fact the speed of light is a universal quantity. Although it is very often convenient to pretend that the speed of light is slower in glass, that’s just a calculational convenience — it’s a misleading description of what’s really going on. In fact, the refractive index and the “speed of light” in glass are different for different frequencies of the sinusoidal radiation, because different frequencies of electric field affect the motion of the electrons in the glass differently.

The interaction of the electric field of the light with the matter (glass or whatever) can be (for nonobvious reasons) well modeled by the electric field exerting a force on an outer electron in an atom in an insulator such as glass as though the electron were bound to the atom by a spring-like force, with damping. The details of the spring stiffness and damping depend on the material and on the frequency of the electric field. In some materials this works out in such a way that in the downstream electric field (the sum of the field contributed by the accelerated source charges and the re-radiation by the accelerated electrons in the material) the peaks can actually be earlier than in the absence of the intervening material, in which case it looks as though the speed of transmission is actually faster than 3e8 m/s. But it is of course still the case that the first detection downstream occurs at 3e8 m/s.
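Feynman works out this spring model in the chapter mentioned above. Neglecting damping, with N electrons per unit volume, electron charge q_e and mass m, natural frequency \omega_0, and driving frequency \omega, his result is

n = 1 + \frac{N q_e^2}{2 \epsilon_0 m (\omega_0^2 - \omega^2)}

which shows directly why n depends on frequency, and why the correction is negative for \omega > \omega_0, the case just described in which the downstream peaks arrive early.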

Incidentally, when in the steady state light is traveling through glass, the frequency of the light in the glass (how many cycles of the sine function occur per second) is the same as the frequency of the light in the air. The speed with which a crest of the sine wave advances (the phase speed) is the distance between crests (the wavelength) divided by the time for one cycle, which is 1/frequency. Because the phase speed is slower in the glass, the wavelength is shorter in the glass than in the air: the crests are pushed closer together.
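In symbols (my shorthand for the paragraph above): the frequency f is the same in both media, so with phase speed c in air and c/n in glass,

\lambda_{\text{glass}} = \frac{c/n}{f} = \frac{\lambda_{\text{air}}}{n}

For glass with n = 1.5, the crests are squeezed 1.5 times closer together than in air.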

As to whether the (apparent) speed of propagation of neutrinos would differ in different materials, I think not. The change in phase speed for light is due to the rather strong interaction of light with matter, leading to re-radiation. Neutrinos have an amazingly small probability of interacting with matter, which is why one can detect them after they’ve traveled hundreds of kilometers through solid rock. So I wouldn’t expect matter to have any effect on the speed of neutrinos.

Bruce Sherwood
