1997: While at Carnegie Mellon, after writing a volume on introductory electricity and magnetism, Ruth Chabay and I teach introductory “modern mechanics” for the first time, including having the students program computational models in the cT language I had created (cT overview). cT was something like a good Basic with 2D graphics built in, but it ran in a windowing environment on Unix workstations, Macintosh, and Windows. cT was based on the TUTOR language of the PLATO computer-based education system.

1998: We have a remarkable student in our mechanics class, David Scherer. While in high school he led a team of his friends to create a 3D game that later won a national prize. He’s intrigued that cT has allowed students to write computational models that work on all platforms, but he glimpses a more powerful approach that would support 3D.

2000: We abandon cT, and in the spring Scherer creates VPython, with Ruth and me deeply involved in design and testing. Many powerful programmers have no interest in or patience for novice programmers, but Scherer saw making programmatic 3D animations accessible to novices as an interesting challenge. His answer is to make real-time navigable 3D animations a side effect of computations, lifting a huge task from the shoulders of the novice; this is of course a huge benefit to sophisticated programmers as well. The original version of VPython is now called “Classic” VPython. It requires installing Python, the “visual” module, and an improved program editor based on the IDLE editor that comes with Python. In the fall of 2000 we start having students use VPython to do computational modeling in our course.

2002-2006: Jonathan Brandmeyer, an engineering student at NCSU, makes major contributions to VPython 3. He introduces the use of the C++ Boost libraries to glue the core of VPython, implemented in threaded C++ code, to those components written in Python, and builds autoconfigurable installers for Linux. In the 16-year history of VPython only three people have made major contributions to the complex C++ code: Scherer, Brandmeyer, and me.

2008: Scherer, having sold his first software company and thinking about what to do next, joins me in working on VPython 5. Jonathan Brandmeyer had provided support in VPython 4beta for opacity, local lighting, and textures, and had made some important architectural changes, but had to stop work on the project before it was completed. Further development led to API changes that were incompatible with the VPython 4beta release, so there was no version 4.

2011: Kadir Haldenbilen, a retired IBM engineer in Turkey, and I collaborate to create the 3D text object and the extrusion object for VPython 5.

2011: Ruth and I learn about WebGL from very knowledgeable computer colleagues in Santa Fe. WebGL is a 3D library built into modern browsers. I poke at it and see that it is quite difficult to use, like its big sister OpenGL used by Classic VPython, but I realize that the VPython API provides a model for making WebGL accessible to novices. I mock up a demo and show it to Scherer, who at the time is CEO of his second major software company. He’s intrigued, and in a couple of months he puts together the glowscript.org site (it’s a Google App Engine application) and solves the problems of operator overloading (to permit adding vectors A and B as A+B) and synchronous code (such as perpetual loops), neither of which is native to JavaScript. He does all this as a project where he can see progress, as relief from his work at FoundationDB, where the extreme difficulty of solving once and for all the problems of distributed databases has been frustrating him. After setting up GlowScript he leaves, and since then I’ve been developing GlowScript. GlowScript programs are written in JavaScript, as are the GlowScript libraries.

2013: Release of VPython 6, based on wxPython, a transition I initiated in June 2012 to address the serious problem that the Carbon programming framework for the Mac would no longer be supported. Major contributions to the release were made by Steve Spicklemire, a physics professor at the University of Indianapolis.

Late 2014: Thanks to learning from Salvatore di Dio, a programmer in France, about the RapydScript Python-to-JavaScript transpiler, I’m able to make it possible for GlowScript users to write their programs using the VPython API, somewhat modified due to the very different environment (browser, GPU). It is also around this time that John Coady, a programmer in Vancouver, implements the Classic VPython API in pure Python, in the IPython environment, in which your Python program runs on a local server and sends data to your browser, where his JavaScript program acts on the data to display the 3D animation in a Jupyter notebook cell (the Jupyter notebook in the browser is similar to a Mathematica or Sage notebook). He uses the GlowScript libraries to render the 3D images. The advantage of this Jupyter implementation in comparison with GlowScript VPython is that you’re writing real Python, not the necessarily imperfect RapydScript representation of Python, and you have access to the large universe of Python modules, which are not accessible from within a JavaScript-based browser.

Fall 2015: Some institutions using our textbook, including Georgia Tech, report switching from Classic VPython to GlowScript VPython, and note with surprise how much more enthusiastic students are about using VPython now that they don’t have to install anything. In contrast, Classic VPython requires the installation of Python, the installation of the visual module, and, on the Mac, installation of an update to Tcl. This can be daunting and can fail for non-obvious reasons. The use of GlowScript VPython rises rapidly; here is a graph of usage vs. time.

January 2016: Coady, Ruth and I, and several well-known physics education colleagues (all of them users of our textbook and of VPython) publish a document on the further evolution of VPython, in which we announce abandonment of the 16-year-old Classic VPython in favor of the GlowScript and Jupyter versions. Here is that document, detailing our reasons.

January-September 2016: In collaboration with Coady, Ruth and I modify and complete Jupyter VPython to use the GlowScript VPython API instead of the Classic API that Coady had started with, because the GlowScript API is much better suited to the distributed nature of the Jupyter environment. Steve Spicklemire and Matthew Craig, a physics professor at Minnesota State University Moorhead, contribute mechanisms for creating pip and conda installers. Here are demo programs running in Jupyter notebooks.

July 2016: 3D text object implemented in GlowScript, with major contributions from Kadir Haldenbilen. I complete the GlowScript implementation of the extrusion object.

February 2017: I make the 3D text object and extrusion object available in Jupyter VPython.

*Bruce Sherwood*


In 1971 in the context of the big PLATO computer-based education project at UIUC I had several physics grad students working with me to develop a PLATO-based mechanics course. They and I each picked an important/difficult mechanics topic and started writing tutorials on the topics. Lynell Cannell was assigned energy and I became concerned that she was the only member of the group not making progress. I was about to have a talk with her about this when she came to me to say that she was hung up on a simple case.

She said, “Suppose you push a block across the floor at constant speed. The net force (your push and the opposing friction force) is zero, so choosing the block as the system no work is done, yet the block’s temperature rises, so the internal energy is increasing. I’m very confused.” I said, “Oh, I can explain this. You just, uh, well, you see, uh… I have no idea.”

We went and talked to Jim Smith, an older physicist very interested in education, very smart, and a good mentor for my then-young self. Jim had thought it through and explained the facts of life to us, with a micro/meso model of the deformations that occur at the contact points on the underside of the block, such that the work done on the block is different from the pseudowork done on the block.

I got very interested in the matter and fleshed out Jim’s insight in more and more detail, but when I showed my analyses to physics colleagues they weren’t having any. Finally I decided to send my paper to AJP (the American Journal of Physics), and the reviewers rejected it. One reviewer said, “Sherwood applies Newton’s 2nd law to a car, which is illegitimate, because a car isn’t a point particle.” I sent it to The Physics Teacher, and the editor replied that he wouldn’t even send it out to reviewers because the physics was so obviously completely wrong.

I asked AJP for an editorial review, and the reluctant response by an associate editor was, “Well, I guess Sherwood is right….but that’s not how we teach this subject!” Finally, in 1983, AJP did reluctantly print the paper “Pseudowork and real work” which you’ll find on my website. This was the first half of the original paper. The second half, applying the theory to the case of friction, “Work and heat transfer in the presence of sliding friction” (also available on my web site), was published jointly in 1984 with William Bernard, because AJP had received a related paper from Bernard and put the two of us in contact with each other.

At that time there had been some short articles in AJP on the topic, but there hadn’t been a longer article on all the aspects. In fact, given physicist resistance to the truth, Bernard was engaged in a war of attrition, sending short articles to AJP on various aspects of the problem, trying to build up to the full story. Nor had there been any article on friction.

The grand old man of Physics Education Research (PER), Arnold Arons, was a fan of my first paper and summarized it in his books on how to teach intro physics (*A Guide to Introductory Physics Teaching*, 1990, and *Teaching Introductory Physics*, 1997). Even he, however, was quite skittish about the friction analysis, in large part because he was strenuously opposed to mentioning atoms in the intro physics course, for philosophical reasons. Arons tried to explain the pseudowork issue to his friend Cliff Schwartz, the editor of The Physics Teacher, but he never succeeded; Schwartz remained forever convinced that this was all massively wrong.

After the papers were published, in 1983 I wrote to Halliday and Resnick about the matter, emphasizing that their textbook was certainly not alone in mishandling the energetics of deformable systems. I got a nice letter back from Halliday which said about their book, “Let me say at once that we are well aware of its serious flaws, along precisely the lines that you describe. We have tried several times to patch things up in successive printings but the matter runs too deep for anything but a total rewrite. We have, in fact, such a rewrite at hand, awaiting a possible next edition.” I have the impression that this major rewrite never occurred, as I don’t know of an edition that fully addresses the issues. It is amusing that Ruth Chabay and I were given the 2014 AAPT Halliday and Resnick Award for Excellence in Undergraduate Teaching (here is a video of our talk on the occasion, dealing with thinking iteratively).

Most textbooks make major errors in the energetics of deformable systems, or simply ignore the issues. A few textbooks have a brief section on related matters, but as Halliday discerned, handling the physics correctly requires significant revisions throughout introductory mechanics. Since the early 1980s there have been many good articles about these matters in AJP, with little impact on the teaching of introductory physics. In 2008 John Jewett published a solid five-part tutorial on the subject in The Physics Teacher.

In my original articles the analysis is couched in terms of the two different integrals, for work and for pseudowork. We found that even strong Carnegie Mellon students had difficulty distinguishing between these two very similar-looking integrals. So eventually we changed our textbook to emphasize two different systems (point-particle and extended) instead of two different integrals. The distinction between the two systems is more vivid than the subtle distinction between the two integrals.

The point-particle model of a system is a point particle with the same mass as the extended system, moving along the same path as the center of mass of the extended system. The change in kinetic energy of the point-particle model is given by the integral of the net force along the path of the point particle, ΔK = ∫F⃗_net·dr⃗_cm, and this is equal to the change in the translational kinetic energy of the extended system. The change in the total energy of the extended system is equal to the sum of the integrals of each force along the path of its point of attachment to the system, ΔE = Σ∫F⃗_i·dr⃗_i.

Here is a video of an apparatus that shows the effects. Two pucks are pulled with the same net force, but one is pulled from the center and doesn’t rotate, whereas the other puck has the string wound around the disk, and it rotates. Somewhat surprisingly, the two pucks move together, but in fact the Momentum Principle guarantees that the centers of mass of the two pucks must move in the same way if the same net force is applied. Here is a computer visualization of the situation.
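A bare-bones numerical version of that comparison can make the two energy accountings concrete. This is a sketch of my own in plain Python, with illustrative values for the mass, radius, force, and pull time (not measurements from the apparatus); the point-particle accounting uses the center-of-mass displacement, while the extended-system accounting uses the displacement of the string's point of application, which is larger for the puck with the string wound around it:

```python
import math

# Illustrative parameters (my choices, not from the demonstration)
m, R, F, t = 0.5, 0.1, 2.0, 1.5   # kg, m, N, s
I = 0.5 * m * R**2                # moment of inertia of a uniform disk

# Both pucks: the Momentum Principle fixes the center-of-mass motion
a = F / m
d_cm = 0.5 * a * t**2             # distance the center of mass moves
v = a * t

# Point-particle system: net force through the c.m. displacement
K_trans = 0.5 * m * v**2
assert math.isclose(F * d_cm, K_trans)     # ΔK = ∫F_net·dr_cm

# Extended system (string wound around the disk): the force acts on the
# string, whose contact point moves d_cm + R*theta as the string unwinds
alpha = F * R / I
theta = 0.5 * alpha * t**2
omega = alpha * t
K_rot = 0.5 * I * omega**2
W_real = F * (d_cm + R * theta)   # work at the point of application
assert math.isclose(W_real, K_trans + K_rot)

print(K_trans, K_rot)
```

The two assertions check the two principles stated above: the same force produces the same translational kinetic energy for both pucks, but the hand pulling the wound string does more work, and the extra work shows up as rotational kinetic energy.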

*Bruce Sherwood*


I found myself puzzled about blackbody radiation, a topic in the course. One common way of talking about blackbody radiation is to imagine an oven with a small hole from which radiation escapes, and to imagine the oven walls to be made of material with evenly spaced energy levels that emit similarly quantized photons. But no solid material has such an energy level scheme (the Einstein solid has evenly spaced levels, but it is a highly simplified though useful model, not a real material), so what’s going on? (The proper approach is to quantize the electromagnetic field, not the emitters.)

I decided to make an appointment to talk with Feynman about this. I was acutely aware that my question was likely to sound hopelessly confused and naive, so I overprepared for the meeting and spoke really fast to try to show that there was an issue. As I expected, initially his face darkened as he wondered why this idiot had been allowed on campus, let alone teaching the curriculum he had created. But by continuing to talk really fast I got far enough to get him intrigued, and he could see that there was an interesting question, and we had an interesting and substantive conversation.

We considered together an astronomically large cloud of atomic hydrogen with some initial energy in the form of atomic excitation. This cloud will emit a line spectrum, not blackbody radiation, yet thermodynamics tells us that eventually the cloud (together with the radiation) will reach thermal equilibrium and the energy distribution of the radiation will be the blackbody continuum. How do the cloud and the radiation get from the initial state to this equilibrium state? (It’s not quite an equilibrium state because energy is being radiated away.)

We could see that various processes would alter the initial line spectrum, including the Doppler effect (from recoil associated with emission) and collisional broadening. So there’s no mystery in the fact that we don’t expect a clean line spectrum to persist. The details of the transient that gets us from initial state to final state may be quite complex, and hard to calculate in detail, but just recognizing that there must be a transient gives a sense of mechanism that is lacking if the final state is presented with no preamble.

Next Feynman supposed that somewhere in the cloud is a speck of dust. (I can no longer remember whether this was special magic dust with evenly spaced energy levels.) Thermodynamics assures us that eventually the hydrogen atoms must come into thermal equilibrium with that speck of dust. Thermodynamics has great power in this respect, but using it alone tends to remove all sense of mechanism.

I believe that this was my first experience of the sense of mechanism that comes from discussing the transient that leads to establishing an equilibrium state or a steady state. In retrospect I think that as a student I was somewhat puzzled by how certain states came into being, but as far as I can remember there was no talk of the transients. Another person who influenced my thinking on this was Mel Steinberg, the creator of the CASTLE electricity curriculum. In the late 1980s I took an AAPT (American Association of Physics Teachers) workshop from him that included desktop experiments with half-farad capacitors (“supercaps”). One of the things he stressed was that the several-second time constant for charging or discharging could be usefully thought of as an observable transient leading to an equilibrium state. This viewpoint in turn influenced the emphasis in Matter & Interactions on the transient that leads to the steady state in DC circuits.
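To make that several-second transient concrete, here is a tiny sketch (the component values are my own illustrative choices, not Steinberg's apparatus) of a half-farad supercap discharging through a resistor, where the time constant RC is long enough to watch on a stopwatch:

```python
import math

# Illustrative values (assumed): a 0.5 F "supercap" discharging through 10 ohms
C, R, V0 = 0.5, 10.0, 3.0
tau = R * C                       # time constant RC = 5 s, easily observable
for t in (0, 5, 10, 15):
    V = V0 * math.exp(-t / tau)   # V(t) = V0 * e^(-t/RC)
    print(f"t={t:>2} s  V={V:.3f} V")
```

Each time constant drops the voltage by a factor of e, so the approach to the equilibrium state (zero volts) is slow enough to be experienced as a transient rather than an instantaneous jump.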

Another influence on both Ruth Chabay and me was doing numerical integrations with a computer, where you gain the strong sense of things happening step by step, not just described in terms of an analytical solution that is a known function of time, which gives little sense of the time evolution of the process.
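Here is the kind of step-by-step computation I mean: a minimal Euler-Cromer integration of a mass on a spring in plain Python (the parameters are illustrative choices of mine). Each pass through the loop applies the Momentum Principle for one small time step, and at the end the result agrees with the analytical cosine solution:

```python
import math

# Illustrative values (my choices): mass on a spring
m, k = 0.1, 4.0                  # kg, N/m
x, v = 0.05, 0.0                 # initial stretch (m), initial speed (m/s)
dt, t = 0.001, 0.0
while t < 1.0:
    F = -k * x                   # spring force at this instant
    v = v + (F / m) * dt         # Momentum Principle, one small step
    x = x + v * dt               # update position with the new velocity
    t = t + dt

# Compare with the known analytical solution x(t) = x0 * cos(omega*t)
omega = math.sqrt(k / m)
print(x, 0.05 * math.cos(omega * t))
```

The analytical formula hands you the final answer all at once; the loop gives the sense of the motion unfolding moment by moment, which is the point being made above.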

All of this has had a big effect on the Matter & Interactions curriculum, plus Ruth Chabay’s insight that computational modeling must be a part of introductory physics. Thinking Iteratively is a 30-minute video of a talk we gave at the summer 2014 AAPT meeting which includes examples of these issues.

*Bruce Sherwood*


My goal for this article is to give a small taste of geometric algebra, conveying a sense of its structure and illustrating how it can span diverse branches of mathematics that physicists currently study in isolation from each other.

The fundamental entity in geometric algebra is the “multivector” consisting in 3D of four elements: scalar, vector, bivector (a 2D surface with a directed normal), and trivector (a 3D solid). Geometric algebra can also be used in 2D, or in dimensions higher than 3D, but for purposes of a brief introduction we’ll stick with the 3D context. One writes a multivector as a sum: scalar + vector + bivector + trivector. This may look odd, since one is taught that “you can’t add a scalar and a vector,” but note that one often writes a vector in the form *a*ₓx̂ + *a*ᵧŷ + *a*_zẑ, where three very different things are added together. From a computer programming point of view, one might think of a multivector as a heterogeneous list: [scalar, vector, bivector, trivector], with methods for operating on such lists.

Fundamental to geometric algebra is the “geometric product” *ab*, where *a* and *b* are multivectors. This product is defined in such a way that multiplication is associative, *abc = (ab)c = a(bc)*, but it is not necessarily commutative; *ba* is not necessarily equal to *ab*. If *a* and *b* are ordinary vectors, the geometric product is *ab = a·b + a∧b*, where *a∧b* is a bivector that is (only in 3D) closely related to the ordinary vector cross product (∧ is pronounced “wedge”). For vectors *a* and *b* the geometric product *ba* will not be equal to *ab* if the wedge product is nonzero, since *b∧a = −a∧b*.

The dot product *a·b* (the scalar part of *ab*) measures how parallel the two vectors are, while the wedge product *a∧b* (the bivector part of *ab*) measures how perpendicular they are. Together these two measures provide all the information there is about the relationship between the two vectors and thereby capture important information that neither the dot product nor the cross product alone provides. Another way of saying this is that the dot product is the symmetric part of *ab* and the wedge product is the antisymmetric part of *ab*.

One way to represent the wedge product of two vectors geometrically is to draw the two vectors tail to tail and make these two vectors the sides of a parallelogram. The area of the parallelogram is the magnitude of the bivector. Compare with the magnitude of the vector cross product, |*a*×*b*| = |*a*||*b*| sin *θ*, and you’ll see that this too is equal to the area of the parallelogram associated with the two vectors.
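As a quick numerical check of this equality, here is a short sketch (with two arbitrary vectors of my choosing) that computes the parallelogram area both from the cross product and from Lagrange's identity |*a*|²|*b*|² − (*a·b*)², which is the squared magnitude of the wedge product:

```python
import math

# Two arbitrary illustrative vectors
a = (1.0, 2.0, 0.5)
b = (-0.5, 1.0, 2.0)

dot = sum(ai * bi for ai, bi in zip(a, b))
cross = (a[1]*b[2] - a[2]*b[1],
         a[2]*b[0] - a[0]*b[2],
         a[0]*b[1] - a[1]*b[0])
area_cross = math.sqrt(sum(c * c for c in cross))   # |a x b|

# |a∧b|^2 = |a|^2 |b|^2 - (a·b)^2  (Lagrange's identity)
amag2 = sum(ai * ai for ai in a)
bmag2 = sum(bi * bi for bi in b)
area_wedge = math.sqrt(amag2 * bmag2 - dot**2)

print(math.isclose(area_cross, area_wedge))   # True
```

Both numbers are the parallelogram area |*a*||*b*| sin *θ*, computed without ever finding *θ* itself.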

We’ll investigate some basic aspects of geometric algebra by starting with three ordinary vectors x̂, ŷ, and ẑ that are unit vectors in the *x*, *y*, and *z* directions. The geometric product x̂x̂ = x̂·x̂ + x̂∧x̂ = 1, because the wedge product of a vector with itself has no area, so the bivector part of x̂x̂ is zero; similarly for the other two unit vectors.

The quantity x̂ŷ is a unit bivector which can be represented as a 1 by 1 square in the *xy* plane (the dot product x̂·ŷ is zero because the two vectors are perpendicular to each other). Similarly ŷẑ is a unit bivector in the *yz* plane and ẑx̂ is a unit bivector in the *zx* plane. The wedge product is antisymmetric, so ŷx̂ = −x̂ŷ; similarly for the other unit vectors.

Next, consider the geometric product of these bivectors with the unit vectors, using the fact that the geometric product is associative and that x̂x̂ = ŷŷ = ẑẑ = 1:

(x̂ŷ)ŷ = x̂(ŷŷ) = x̂, and x̂(x̂ŷ) = (x̂x̂)ŷ = ŷ.

We have similar results for other products of the bivectors and the vectors.

What is x̂ŷẑ? This is a “trivector,” a cube 1 by 1 by 1. Something surprising results if we multiply this unit trivector by itself (each swap of two different unit vectors flips the sign):

(x̂ŷẑ)(x̂ŷẑ) = −x̂ŷx̂ẑŷẑ = x̂x̂ŷẑŷẑ = ŷẑŷẑ = −ŷŷẑẑ = −1.

This result justifies identifying the trivector x̂ŷẑ with the imaginary number *i*. Now consider this:

(x̂ŷẑ)x̂ = −x̂ŷx̂ẑ = x̂x̂ŷẑ = ŷẑ.

The bivector ŷẑ lies in the *yz* plane. The standard vector cross product of ŷ and ẑ points in the *+x* direction, which is x̂. The familiar cross product vector is the normal to the associated bivector (in 3D only), and evidently the bivector is *i* times the cross product vector: ŷẑ = *i*x̂. Similarly, you can show that ẑx̂ = *i*ŷ and x̂ŷ = *i*ẑ. It turns out that bivectors are more useful and better behaved than their “duals,” the cross products. For example, in the old vector world one must sometimes make subtle distinctions between “polar” vectors (the ordinary kind) and “axial” vectors which behave differently under reflection (examples are magnetic field vectors). In geometric algebra there is no such distinction.
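These unit-vector relations are easy to verify mechanically. Below is a toy implementation of the geometric product for Euclidean 3D, a sketch of my own (not part of VPython or any geometric algebra library): a basis blade is a sorted tuple of axis indices, multiplying two blades concatenates their indices and sorts them with a sign flip per swap, and a repeated index contracts away because a unit vector times itself is 1:

```python
def blade_mul(a, b):
    """Product of two basis blades, e.g. (0,1) is the bivector x^y.
    Returns (sign, blade). Each swap of distinct indices flips the sign;
    a repeated index cancels because e_i e_i = 1."""
    seq = list(a) + list(b)
    sign = 1
    changed = True
    while changed:                      # bubble sort, tracking swap parity
        changed = False
        for i in range(len(seq) - 1):
            if seq[i] > seq[i + 1]:
                seq[i], seq[i + 1] = seq[i + 1], seq[i]
                sign = -sign
                changed = True
    out = []
    i = 0
    while i < len(seq):                 # cancel adjacent equal indices
        if i + 1 < len(seq) and seq[i] == seq[i + 1]:
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return sign, tuple(out)

def gmul(A, B):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    C = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = blade_mul(ba, bb)
            C[blade] = C.get(blade, 0) + s * ca * cb
    return {k: v for k, v in C.items() if v != 0}

x = {(0,): 1}; y = {(1,): 1}; z = {(2,): 1}   # the unit vectors
I = gmul(gmul(x, y), z)                       # the unit trivector

print(gmul(x, x))   # {(): 1}        x x = 1
print(gmul(y, x))   # {(0, 1): -1}   y x = -(x y)
print(gmul(I, I))   # {(): -1}       the trivector squares to -1
print(gmul(I, x))   # {(1, 2): 1}    I x = y z, the yz bivector
```

The four printed results reproduce exactly the identities derived above, including the trivector behaving like the imaginary unit.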

When I first saw these relationships among x̂, ŷ, and ẑ, I was amazed. As a physics student I was introduced to the 2 by 2 “Pauli spin matrices” used to describe electron spin. The matrices, and their various product and commutation relationships, were taught as something special and particular to quantum mechanical spin systems. I was astonished to find that those 2 by 2 matrices behave exactly like the unit vectors x̂, ŷ, and ẑ in the geometric algebra context, as discussed above. This is an example of Hestenes’ argument that the mathematical education of physicists fails to bring together diverse branches of mathematics that can be unified in the geometric algebra context.

Another example of a need for unification is that as a physics student one encounters many different schemes for handling rotations. There is a beautiful representation of rotations in geometric algebra. Consider the geometric product *abb = a(bb) = a* if *b* is a unit vector. If we write this as *(ab)b = a*, and consider *ab* to be a rotation operator, you see that *ab* can be thought of as a rotor that rotates *b* into *a* (there is also scaling if one doesn’t use unit vectors).

For extensive treatments of geometric algebra, see for example the textbooks “Geometric Algebra for Physicists” and “Geometric Algebra for Computer Science.”

*Bruce Sherwood*


Maxwell’s discovery, about 150 years ago, of the real nature of light stands as one of the greatest achievements in all of human history. The goal of this talk was to share with people what light really is, because its nature is not widely understood. I also wanted to demystify “electromagnetic radiation” and “electric fields”, terms that for many people are rather scary due to a lack of understanding of what they really mean.

Technical comment for physicists: As a result of preparing and giving this talk, I had a minor insight about the physics of light. A colleague has argued that magnetic fields are merely electric fields seen from a different reference frame. I’ve argued that this isn’t the whole story. I offer several examples that show that magnetic fields are not simply relativistic manifestations of electric fields.

(1) All experiments on electrons to date are consistent with them being true point particles, with zero radius, yet they have a magnetic moment even when at rest. There is no reference frame moving with constant velocity in which the magnetic field of the electron vanishes.

(2) Light consists of electric and magnetic fields that are perpendicular to each other, propagating at the speed of light. There is no physically realizable reference frame in which it is possible to transform away the magnetic field.

(3) Here is my minor recent insight: In the classical wave picture, light is produced by accelerated charges. Because the velocity is constantly changing, there is no constant-velocity reference frame in which the charge is at rest, and in which the magnetic field of the charge vanishes.

*Bruce Sherwood*


Concerning calculus, I would say that I’m not sure the situation has actually changed all that much from when I started teaching calculus-based physics in the late 1960s. Looking through a 1960s edition of Halliday and Resnick, I don’t see a big difference from the textbooks of today.

More generally, there is a tendency for older faculty to deplore what they perceive to be a big decline in the mathematical abilities of their students, but my experience is that students are adequately capable of algebraic manipulation and even calculus manipulation (e.g., they know the evaluation formulas for many cases of derivatives and integrals). What IS a serious problem, however, and is perhaps new, is that many students ascribe no meaning to mathematical manipulations. Here is an example that Ruth Chabay and I have seen in our own teaching:

The problem is to find the final kinetic energy. The student uses the Energy Principle to find that the final kinetic energy is 50 joules. Done, right? No! Next the student uses the mass to determine what the final speed is. Then the student evaluates the expression ½*mv*² (and of course finds 50 joules). Now the student feels that the problem is solved, and the answer is 50 joules.

We have reason to believe that what’s going on here is that kinetic energy has no real meaning; rather, kinetic energy is the thing you get when you multiply ½ times *m* times the square of *v*. Until and unless you’ve carried out that particular algebraic manipulation you haven’t evaluated kinetic energy.

Another example: A student missed one of my classes due to illness and actually went to the trouble of coming to my office to ask about what he’d missed, so he was definitely above average. The subject was Chapter 12 on entropy. I showed him an exercise I’d had the class do. Suppose there is some (imaginary) substance whose entropy is a power of the energy, *S = aE^n*. How does the energy depend on the temperature? I asked him to do this problem while I watched. (The solution is that *1/T = dS/dE = naE^(n−1)*, so *E = (naT)^(1/(1−n))*.) The student knew the definition *1/T = dS/dE*, but he couldn’t even begin the solution. I backed up and backed up until finally I asked him, “If *y = x^n*, what is *dy/dx*?” He immediately said that *dy/dx = nx^(n−1)*. So I said, okay, now do the problem. He still couldn’t! His problem was that he knew a canned procedure that if you have an *x^n*, and there’s an exponent, you put the exponent in front and reduce the exponent by one, and that thing is called “*dy/dx*” but has no meaning. There is no way to evaluate *dS/dE* starting from *S = aE^n*, because there is no *x*, there is no *y*, and nowhere in calculus is there a thing called *dS/dE*.
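The entropy exercise can be checked numerically. In this sketch I assume a power-law entropy S(E) = a·E^n, with the values of a and n being my own illustrative choices, and apply the definition 1/T = dS/dE with a numerical derivative; inverting the result should recover the energy we started from:

```python
import math

# Illustrative power-law entropy (a and n are my choices, not the article's)
a, n = 2.0, 0.5
S = lambda E: a * E**n

E0 = 3.7                                    # some energy, in arbitrary units
h = 1e-6
dSdE = (S(E0 + h) - S(E0 - h)) / (2 * h)    # numerical derivative dS/dE
T = 1.0 / dSdE                              # the definition 1/T = dS/dE

# Closed-form inversion: E(T) = (n*a*T)**(1/(1-n)) should recover E0
E_of_T = (n * a * T) ** (1 / (1 - n))
print(math.isclose(E_of_T, E0, rel_tol=1e-6))   # True
```

The manipulation the student could not start is exactly the three lines in the middle: differentiate S with respect to E, identify the result with 1/T, and solve for E.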

We are convinced that an alarmingly large fraction of engineering and science students ascribe no meaning to mathematical expressions. For these students, algebra and calculus are all syntax and no semantics.

A related issue is the difficulty many students have with formal reasoning, and here there may well be a new problem. It used to be that an engineering or science student would have done a high school geometry course that emphasized formal proofs, but this seems to be no longer the case. Time and again, during class and also in detailed Physics Education Research (PER) interviews with experimental subjects, we see students failing to use formal reasoning in the context of long chains of reasoning. An example: Is the force of the vine on Tarzan at the bottom of the swing bigger than, the same as, or smaller than *mg*? The student determines the momentum just before and just after the bottom of the swing and correctly determines that the change in momentum points upward. The student concludes correctly that the net force must point upward. The student determines that the vine pulls upward and the Earth pulls downward. The student then says that the force of the vine is equal to *mg*! Various studies by Ruth Chabay and her PER grad students have led to the conclusion that the students aren’t using formal reasoning, in which each step follows logically from the previous step. Often the students just seize on some irrelevant factor (in this case, probably the compiled knowledge that “forces cancel”).
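For concreteness, here is the correct chain of reasoning with made-up numbers (mass, vine length, and speed are illustrative assumptions of mine): at the bottom of the swing the net force must point upward with magnitude mv²/L, so the vine tension must exceed mg by exactly that amount:

```python
# Illustrative values (my choices): Tarzan at the bottom of the swing
m, g, L, v = 80.0, 9.8, 10.0, 6.0      # kg, m/s^2, vine length (m), speed (m/s)

F_net = m * v**2 / L                   # upward net force required for circular motion
F_vine = m * g + F_net                 # vine force = weight + net upward force

print(F_vine > m * g, round(F_vine, 1))   # True 1072.0
```

The logical step the students skip is the last line: since the net force is upward and gravity is downward, the vine force cannot equal mg; it must be larger.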

This problem with formal reasoning may show up most vividly in the Matter & Interactions curriculum, where we want students to carry out analyses by starting from fundamental principles rather than grabbing some secondary or tertiary formula. We can’t help wondering whether the traditional course has come to be formula-based rather than principle-based because faculty recognized a growing inability of students to carry out long chains of reasoning using formal procedures, so the curriculum slowly came to depend more on having students learn lots of formulas and the ability to see which formula to use.

Coming back to calculus, I assert that our textbook has much more calculus in it than the typical calculus-based intro textbook. This may sound odd, since we have had students complain that there’s little or no calculus in our book (we heard this more often from unusually strong students at Carnegie Mellon than at NCSU). The complaint is based on the fact that we introduce and use real calculus in a fundamental way right from the start, but many students do not see that the sum of a large number of small quantities has anything to do with integrals, nor that the ratio of small quantities has anything to do with derivatives. For formula-based students, such sums and ratios have nothing to do with calculus, despite our efforts to help them make a link between their calculus course and the physics course.

*Bruce Sherwood*


The aspect of quantum mechanics that is pretty widely known and accepted is that small objects (atoms, electrons, nuclei, molecules) have “quantized” properties and that when you go to measure one of these quantized properties you can get various results with various probabilities. For example, an electron can have spin “up” or “down” (counterclockwise or clockwise rotation as seen from above; the Earth as seen from above the North Pole rotates counterclockwise, and we say its spin is up if we take North as “up”). The Earth, being a large classical object, can have any amount of spin (the rate of rotation, currently one rotation per 24 hours). The electron on the other hand always has the same amount of spin, which can be either up or down.

Pass an electron into an apparatus that can measure its spin, and you always find the same amount of spin, and for electrons not specially prepared you find the spin to be “up” 50% of the time and “down” 50% of the time. (It is possible to build a source of “polarized” electrons which, when passed into the apparatus, are always measured “up”, but the typical situation is that you have unpolarized electrons, with 50/50 up/down measurements.) It is a fundamental discovery that with a beam of unpolarized electrons it is literally impossible – not just hard, but impossible – to predict whether the spin of any particular electron when measured will be up or down. All you can say is that there is a 50% probability of its spin being up. It’s also possible to prepare a beam of partially polarized electrons, where for example you know that there is a 70% probability of measuring an electron’s spin to be up, but that’s all you know and all you can know.

So much for a review of the probabilistic nature of measuring a quantized property such as spin for a single tiny object. Next for the aspect of quantum mechanics that is less widely appreciated, which has to do with measurements made on one member of a group of tiny objects. A simple case is two electrons that are in a “two-particle state”, where one can speak of a quantized property of the combined two-particle system. For example, in principle it would be possible to prepare a two-electron state with total spin (“angular momentum”) zero, meaning that electron #1 could be up, and electron #2 would be down, or vice versa. As a matter of fact, it was only in the last few decades that experimental physicists learned how to prepare such multiparticle states and make measurements on them, and it is these experiments, together with superb theoretical analyses, that have clarified the issues that worried Einstein. (Actually, most experiments have involved photons rather than electrons, but I’ve chosen the two-electron system as being more concrete in being able to make analogies to the spinning Earth.)

Suppose Carl prepares a zero-spin electron pair and gives one electron to Alice and the other to Bob (Alice and Bob are in fact names used in the scientific literature to help the reader keep straight the two observers.) Alice and Bob keep their electrons in special carrying cases carefully designed not to alter the state of their electron. They get in two warp-speed spaceships and travel to opposite ends of our galaxy or, if one prefers, to opposite ends of the Universe (if the Universe has ends). Many years later, Alice measures the state of her electron and finds that its spin is up (there’s an arrow pointing up on her carrying case indicating what will be called “up”, and a similar arrow pointing up on Bob’s carrying case). If Bob measures his electron, he will definitely find its spin to be down.

One might reasonably interpret these observations something like this: Carl happened to give Alice an “up” electron and (necessarily) gave Bob a “down” electron. There was a 50/50 chance of giving Alice an up electron, and this time Carl happened to give her an up electron. Then of course no matter how long Alice waits before measuring her electron, she’s going to find that it is “up”, and no matter how long he waits Bob is going to find that his electron is “down”. Yes, there are probabilities involved, because neither Carl nor Alice knows the spin of the electron until Alice makes her measurement, but the electron obviously “had” an up spin all the time.
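This common-sense picture is what physicists call a “hidden variable” model, and it is easy to simulate. Here is a toy Python sketch (my own, for illustration): each electron’s result is fixed at the moment Carl prepares the pair.

```python
import random

rng = random.Random(0)

def prepare_pair():
    """The common-sense (hidden-variable) picture: each electron's spin
    is already definite when Carl hands out the pair."""
    alice = rng.choice(["up", "down"])
    bob = "down" if alice == "up" else "up"
    return alice, bob

pairs = [prepare_pair() for _ in range(1000)]
# Alice and Bob always get opposite results, and Alice sees 'up'
# about half the time.
assert all(a != b for a, b in pairs)
```

The catch, which Bell’s analysis made precise, is that for measurements along a single agreed-upon axis this model reproduces the same statistics as the quantum pair; it takes measurements along several different axes to expose the difference experimentally.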

The amazing truth about the Universe is that this reasonable, common-sense view has been shown to be false! The world doesn’t actually work this way!

Thanks to major theoretical and experimental work over the last few decades, we know for certain that until Alice makes her measurement, her electron remains in a special quantum-mechanical state which is referred to as a “superposition of states” – that her electron is simultaneously in a state of spin up AND a state of spin down. This idea is very hard to accept. Einstein never did accept it. In a famous paper in the 1930s, he and a couple of colleagues proposed experiments of this kind and, because quantum mechanics predicts that the state of Alice’s electron will remain in a suspended animation of superposed states, concluded that quantum mechanics must be wrong or at least incomplete. It took several decades of hard work before experimental physicists were able to carry out ingenious experiments of this kind and were able to prove conclusively that, despite the implausibility of the predictions of quantum mechanics, quantum mechanics correctly describes the way the world works.

I find it both ironic and funny that Einstein’s qualms led him to propose experiments for which he quite reasonably expected quantum mechanics to be shown to be wrong or incomplete, only for it to turn out that these experiments show that the “unreasonable” description of nature provided by quantum mechanics is in fact correct. These aspects of quantum entanglement aren’t mere scientific curiosities. They lie at the heart of work being done to implement quantum computing and quantum encryption.

What about relativity, and that nothing can travel faster than light? Not a problem, actually. The key point is that Alice cannot send any useful information to Bob. She cannot control whether her measurement of her electron will be up or down. Once she makes her “up” measurement, she knows that Bob will get a “down” measurement, but so what? And all Bob knows when he makes his down measurement is that Alice will make an up measurement. To send a message, Alice would have to choose to make her electron be up or down, as a signal to Bob, but the act of forcing her electron into an up or down state destroys the two-electron “entangled” state.

I recommend a delightful popular science book on this, from which I learned a lot, “The Dance of the Photons” by Anton Zeilinger. Zeilinger heads a powerful experimental quantum mechanics group in Vienna that has made stunning advances in our understanding of the nature of reality in the context of quantum mechanics. In this book he makes the ideas come alive. The book includes detailed discussions of Bell’s inequalities and much else (Bell was a theoretical physicist whose analyses stimulated experimentalists to design and carry out the key experiments in recent decades).

It seems highly likely that Zeilinger will get the Nobel Prize for the work he and his group have done. A charming feature of the book is that Zeilinger is very generous in giving credit to many others working in this fascinating field. Incidentally, there is some movement in the physics community to bring contemporary quantum mechanics into the physics major’s curriculum, which in the past has been dominated by stuff from the 1920s.

*Bruce Sherwood*

]]>

Several times at public gatherings of physicists I have heard the claim that the Feynman course at Caltech was a failure, and I have always seized the opportunity to rebut these claims from my own experience. One of the things I’ve pointed out is that at the time he gave the original lectures there was no textbook, nor were there problems keyed to the lectures, whereas by the time I lectured in the course there was a lot of infrastructure, including the book. I also point out that in a traditional intro course students don’t understand everything, and ask the audience whether it is better to understand part of a traditional textbook or part of Feynman. In my judgment the course was a success in the late 60’s at Caltech, not a failure. When I moved to UIUC in 1969, I judged that it would have worked in an honors course there.

Matthew Sands with Robert Leighton translated Feynman’s unique spoken word into print. In his memoir “Capturing the Wisdom of Feynman”, Physics Today, April 2005, page 49, Sands provided confirmation for my own viewpoint. Feynman’s own assessment in the preface that it was a failure has helped perpetuate the notion that it didn’t work, and I was glad to learn from Sands’ memoir that this was off-the-cuff, not a carefully considered judgment. Moreover, Feynman’s view was not shared by others who were involved in teaching the course. As he says in the preface, “My own point of view — which, however does not seem to be shared by most of the people who worked with the students — is pessimistic”.

Kip Thorne has written some commentary on the history of the Lectures:

http://www.basicfeynman.com/introduction.html

Lawrence Krauss’s excellent scientific biography “Quantum Man: Richard Feynman’s Life in Science” also discusses the Feynman course, and what he says is consistent with the views of Sands and me.

*Bruce Sherwood*

]]>

There are some other examples of predicting and finding previously unsuspected particles.

In the 1860s, building on preliminary, partial work by others, Mendeleev was able to bring order to all the known elements in his famous periodic table. Moreover, he correctly interpreted holes in his table as representing elements that had yet to be discovered. For example, he not only predicted the existence of germanium but also predicted its approximate atomic weight and chemical properties, and he was right. In all, he correctly predicted 8 elements that were unknown at the time. The Wikipedia article shows his debt to the ancient Sanskrit grammarian Panini, who had recognized similar kinds of order among the sounds of human speech.

At the time, no one, including Mendeleev, had any way to explain the ordering of the elements made manifest in the periodic table. It was 40 years later that Rutherford and his coworkers discovered that atoms consist of a tiny, extremely dense positively-charged core (the “nucleus”) surrounded by negatively charged electrons (which had recently been discovered by Thomson). A few years later experiments showed that the order of elements in the periodic table simply reflects the number of electrons in the atom (1 for hydrogen, 2 for helium, etc.).

In 1928 Dirac created the famous “Dirac equation”, constituting a version of quantum mechanics that is consistent with special relativity (the earlier Schrodinger equation is not consistent with relativity, though it remains useful in the nonrelativistic limit). An odd feature of the Dirac equation was its prediction of electron-like particles with negative energy, which led Dirac with some reluctance to predict the existence of an “anti-electron”, an electron-like object with positive charge. The positron was soon found by Carl Anderson at Caltech, with the predicted properties.

In the 1920s there was a puzzle in “beta decay”, in which a nucleus emits an electron (and metamorphoses into a nucleus with one additional positive charge; see my post on neutron decay). The puzzle was that the energies of the parent and daughter nuclei were known (from their masses) to be fixed quantities, but the electron was observed to have a broad range of energies, not simply the difference of the two nuclear energies. This was an apparent violation of the well-established principle of energy conservation. There were suggestions that perhaps energy is not conserved in nuclear interactions, but Pauli could not accept that. In 1930 he proposed that the electron is not the only particle emitted in beta decay, that there is also another particle emitted but not observed. This implied that the unseen particle must have no electric charge, as otherwise it would be easily detected, and in fact it must also not interact with nuclei through the “strong interaction”, because again this would make the particle easily detectable. Also, the maximum observed energy of the electron was experimentally found to be about equal to the energy difference of the parent and daughter nuclei, which implied that the unseen particle must have very little mass. Pauli had predicted what is now called the neutrino, with specific properties: no electric charge, very small mass, no strong interactions.

The neutrino was observed directly only much later, in 1956, when Reines and Cowan placed detectors behind thick shielding next to a nuclear reactor at the Savannah River Plant in South Carolina. Neutrino reactions are very rare, but the flux of neutrinos was so large that occasionally the experimenters observed “weak” interactions of the neutrinos with matter. The properties of the neutrino matched Pauli’s predictions.

In the early 1960s Gell-Mann and Ne’eman independently were able to classify the large zoo of “elementary” particles into groups of octets and decuplets. There was a decuplet (of 10 particles) arranged in a triangle, like bowling pins, in which the particle at the point was unknown. Gell-Mann was able not only to predict the existence of this particle, which he called the Omega-minus, but he also predicted its charge and mass. A hunt for the Omega-minus was successful, and it had the predicted properties.

As was the case with Mendeleev’s periodic table, at first there was no explanation for the “why” of octet and decuplet groupings of the known particles. Soon however Gell-Mann and Zweig independently proposed that each “baryon” (heavy) particle was made of 3 “quarks” with unusual fractional electric charges, and each “meson” was made of a quark and antiquark. At first somewhat controversial, intense experimental work and closely related theoretical work by Feynman made it clear that the quark model does indeed explain the “periodic table of the particles”.

The creation of antiprotons occurred in a context very similar to the creation of the Higgs boson. The Berkeley Bevatron was a particle accelerator built in 1954, designed to accelerate protons to an energy sufficient to produce antiprotons if, as everyone predicted, the antiproton would have the same mass as a proton (but negative charge). This design criterion was similar to the design consideration for the Large Hadron Collider, that of accelerating protons to an energy large enough to create Higgs bosons. Because by 1954 many particles were known to have antiparticle partners, it was not a surprise when antiprotons were indeed produced by the Bevatron.

I’ve listed some major predictions that were successful. However, it seems to me that “postdiction” is more common. For example, no one predicted that the rings of Saturn can be braided. When spacecraft first returned closeups of the rings, scientists were startled to see braided rings. A lot of work went into understanding these unusual structures (the key turned out to be the role of small “shepherd” moons).

*Bruce Sherwood*

]]>

At the other extreme, decays associated with the strong/nuclear interaction typically have a lifetime of about the time required for light to move the distance of a diameter of a proton, about (3 × 10^{–15} m) / (3 × 10^{8} m/s) = 1 × 10^{–23} seconds. Any decay that takes a lot longer than 1 × 10^{–23} seconds is typically an indication that it is associated with the weak interaction (though some “electromagnetic” decays may also have long mean lives).
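The timescale arithmetic can be spelled out explicitly (just the round numbers quoted above, in a few lines of Python):

```python
# Light-crossing time of a proton, using the round numbers in the text.
proton_diameter = 3e-15          # m, rough proton diameter
c = 3e8                          # m/s, speed of light
strong_time = proton_diameter / c
print(strong_time)               # 1e-23 s, the strong-decay timescale

# A typical weak decay -- the charged pion, mean life about 25 ns --
# is vastly slower than this:
pion_mean_life = 25e-9           # s
print(pion_mean_life / strong_time)   # roughly 2.5e15 times longer
```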

For example, the positively charged pion decays into a low-energy positive muon (a heavy positron — a heavy anti-electron) and a low-energy neutrino with a mean life of about 25 nanoseconds (25 × 10^{–9} seconds), a time vastly longer than 1 × 10^{–23} seconds, and this is a weak decay. (The energies are low because the pion mass is only slightly larger than the muon mass).

It is pion decay that is the major source of neutrinos made in accelerators. The pions are made at high energy and move at high speed, with the result that the neutrinos emitted in the direction of motion of the pion get thrown forward with high energy. This is the mechanism for producing copious beams of high-energy neutrinos. There are low-energy neutrinos produced by nuclear reactors and by fusion reactions in our Sun.

One can say that there are three classes of fundamental particles: (1) particles made of quarks, called hadrons, including protons, neutrons, and pions, (2) particles that are not made of quarks, called leptons, including electrons, muons, and neutrinos, and (3) “glue” particles that mediate interactions among particles (for example, the photon mediates electromagnetic interactions). There exist purely leptonic interactions and decays, such as muon decay into electron, neutrino, and antineutrino, with a mean life of about 2 microseconds (2 × 10^{–6} seconds, a long time in this scheme of things). There also exist semileptonic weak interactions such as neutron decay, in which the neutron and proton are hadrons but the electron and antineutrinos are leptons. Similarly, in pion decay the pion is a hadron but the muon and neutrino are leptons. There is a beautiful picture that unifies these various kinds of interactions, having to do with the exchange of particles.

The modern quantum field theory view of electron-electron repulsion is that one electron emits a (“virtual”) photon, with a change of energy and momentum by the emitting electron, and this (“virtual”) photon is absorbed by the other electron, so the energy and momentum of this electron also change. The photon is called “virtual” because it is not directly observable and can have a relation between energy and momentum that is not possible for a real photon. Photon exchange is considered to be the fundamental basis for electromagnetic interactions.

Similarly, *remarkably* similarly, weak interactions such as muon or neutron decay can be modeled as the exchange of positive or negative W particles. In this view, the free neutron decays into a proton and a W^{–} (charge is conserved: the neutron has no electric charge, the proton has +1 unit of electric charge, and the W^{–} has -1 unit of electric charge). Next, the W^{–} decays into an electron and an antineutrino (“lepton” number is conserved: the W^{–} has lepton number zero, the electron has lepton number +1, and the antineutrino has lepton number -1).

An even more fundamental picture of neutron decay is that a quark with charge -1/3 in the neutron emits a W^{–} and changes into a quark with charge +2/3, a net change of +1, corresponding to the neutron changing into a proton, but with a change that’s actually associated with the change of just one of its quarks; the other two quarks are mere spectators in the process.
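The bookkeeping in this quark-level picture can be checked mechanically. Here is a small Python sketch (mine, for illustration) using exact fractions for the quark charges:

```python
from fractions import Fraction

u = Fraction(2, 3)    # up-quark charge, in units of e
d = Fraction(-1, 3)   # down-quark charge

neutron = u + d + d   # udd composition
proton = u + u + d    # uud composition
assert neutron == 0 and proton == 1

# One d becomes a u by emitting a W-minus; the +1 change in quark
# charge is balanced by the W's charge of -1.
W_minus = d - u
assert W_minus == -1
assert proton + W_minus == neutron   # charge conserved at the vertex

# Second vertex: W- -> electron + antineutrino; lepton number balances.
electron_L, antineutrino_L = 1, -1
assert electron_L + antineutrino_L == 0   # matches the W's lepton number
```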

Here is an animation of this picture of neutron decay, and here is a screen shot from the end of the animation:

A key concept is the interaction “vertex”: In electron-electron repulsion, one vertex is the point where one of the electrons emits a photon, and the electron and photon paths diverge. A second vertex is where the other electron absorbs the photon and changes its direction. “Feynman diagrams” are little pictures of these vertex interactions. In neutron decay one vertex is the quark – quark – W^{–}, and a second vertex is W^{–} – electron – antineutrino.

Consider positive pion decay. The positively charged pion with charge +1 consists of a quark with charge +2/3 and an antiquark of charge +1/3. One vertex is quark – antiquark – W^{+}, and the other vertex is W^{+} – muon – neutrino.

Consider the purely leptonic decay of a negative muon. One vertex is muon – W^{–} – neutrino, and the other is W^{–} – electron – antineutrino.
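All three decays just described can be summarized as lists of vertices, each checkable for charge conservation. A small Python sketch (the particle labels are my own shorthand, not standard notation):

```python
from fractions import Fraction as F

# Electric charges, in units of e, for the particles in the decays above.
charge = {"u": F(2, 3), "d": F(-1, 3), "anti-d": F(1, 3),
          "e-": F(-1), "mu-": F(-1), "mu+": F(1),
          "nu": F(0), "anti-nu": F(0), "W+": F(1), "W-": F(-1)}

def vertex_ok(incoming, outgoing):
    """Charge conservation at a single interaction vertex."""
    return sum(charge[p] for p in incoming) == sum(charge[p] for p in outgoing)

# Neutron decay: d -> u + W-, then W- -> e- + anti-nu.
assert vertex_ok(["d"], ["u", "W-"])
assert vertex_ok(["W-"], ["e-", "anti-nu"])
# Positive pion decay: u + anti-d -> W+, then W+ -> mu+ + nu.
assert vertex_ok(["u", "anti-d"], ["W+"])
assert vertex_ok(["W+"], ["mu+", "nu"])
# Negative muon decay: mu- -> W- + nu, then W- -> e- + anti-nu.
assert vertex_ok(["mu-"], ["W-", "nu"])
assert vertex_ok(["W-"], ["e-", "anti-nu"])
```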

Feynman diagrams consist of interaction vertices with exchanges of virtual particles such as the photon (electromagnetism) or the W (weak interactions). In fact, the first big unification was the recognition that electromagnetism (photon exchange) and weak interactions (W exchange) were basically the same thing, which is called the “electroweak” interaction.

Summary: Weak interactions involve interaction vertices that include the W^{+} or W^{–}, and they are slow compared to strong/nuclear interactions. W’s “couple” to quarks, and they also “couple” to leptons, hence such “semileptonic” phenomena as neutron decay, where one vertex is quark – quark – W^{–} and the other vertex is W^{–} – electron – antineutrino.

I should mention that in addition to the photon, W^{+}, and W^{–}, there is also an electrically neutral Z particle that is exchanged in certain kinds of weak interactions where no change in electric charge occurs at a vertex. Also, when a W decays into an electron and an antineutrino, the antineutrino is an electron-type antineutrino, whereas when a W decays into a (negative) muon and an antineutrino, the antineutrino is a muon-type antineutrino. The neutrinos and antineutrinos associated with positrons and electrons are different from the neutrinos and antineutrinos associated with positive and negative muons.

*Bruce Sherwood*

]]>