Welcome to Quantum Mechanics 3, Probability and Uncertainty. In the first video we saw that in order to correctly describe thermal radiation's variation of intensity with frequency, Max Planck reluctantly proposed the radical idea that light energy comes in discrete chunks. At a frequency nu, the energy quantum is E equals Planck's constant h times nu. In the second video we presented Einstein's arguments that these quanta represent real particles of light, what we now call photons. The discrete photon concept didn't overthrow the wave theory of light, but rather combined with it to form the counter-intuitive picture of wave-particle duality. Light energy arrives in discrete photons, seemingly at random, but with a probability determined by the classical wave intensity. It's only after a great many discrete photons have arrived that the probability distribution becomes apparent as a continuous wave pattern. Now it's quite reasonable to be very skeptical of this idea, because if it is ultimately true, it fundamentally undermines a key foundation of classical physics: determinism, the idea that the initial conditions of an experiment completely determine its outcome, so that two identical experiments with identical initial conditions will give identical results. But the double-slit experiment seems to violate classical determinism. In principle the electromagnetic field remains constant throughout the experiment, yet under these identical conditions different photons arrive at different positions on the output screen. Unless some further process is at work, there seems to be no way to account for this except to assume that at least some natural processes contain a fundamentally random aspect. Einstein was one of the most outspoken critics of this point of view, even though his particles-of-light theory was a primary reason physicists found themselves in this dilemma in the first place. He famously voiced his opinion that "God does not play dice."
Indeed, there is no place for chance in classical determinism, which we'll express as follows. The state of a physical system can, at least in principle, be specified with perfect precision at some initial time. For instance, we assume that at a given time, particle one has an exact mass, location, and velocity. Likewise particle two, and so on. Then the state of the system evolves according to physical laws, such as the law of gravity, such that its state at some final time is exactly determined by those laws and the initial state. The precision of the mathematical description of this process requires that the same initial state will always produce the same final state. Now, of course, if the initial state is uncertain, then the final state will be as well. If we don't know precisely the initial position of a billiard ball, then, even though its motion is described by exact physical laws, its subsequent position on the billiard table will also be uncertain. There are even so-called chaotic systems where the uncertainty grows exponentially with time. As a result, over sufficiently long times the final state cannot be known if there is any uncertainty at all in the initial state. We may, however, be able to determine the probability of a given final state. Adding a round central bumper to our billiard table creates a chaotic system. Here we have a blue ball and a sequence of red balls displaced from the blue ball by amounts ranging from one part in a hundred of the width of the table to one part in ten to the sixteenth, an astronomically minuscule amount. All balls start with identical velocities. Each collision with the round bumper peels away another two orders of magnitude of certainty, and in a short while the balls are seemingly randomly distributed around the table. Yet this is a fully deterministic system. If two balls have exactly the same initial conditions, they will follow exactly the same path for all time.
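The exponential growth of uncertainty described above can be sketched in a few lines of code. As a stand-in for the billiard table (an assumption: the billiard itself is not simulated here, only the principle), this sketch uses the logistic map, a standard one-line chaotic system: two trajectories starting one part in ten to the twelfth apart diverge to order-one separation, while truly identical initial conditions reproduce the same path exactly.

```python
# A minimal sketch of sensitive dependence on initial conditions.
# The logistic map stands in for the chaotic billiard table (an
# assumption: only the principle is illustrated, not the billiard).

def logistic(x, r=4.0):
    """One step of the logistic map; fully deterministic, chaotic at r = 4."""
    return r * x * (1.0 - x)

x_a = 0.3                # the "blue ball"
x_b = 0.3 + 1e-12        # a "red ball", astronomically close to it
max_sep = 0.0
for _ in range(60):
    x_a, x_b = logistic(x_a), logistic(x_b)
    max_sep = max(max_sep, abs(x_a - x_b))
# max_sep grows from 1e-12 toward order one: the paths end up unrelated.

x_c = 0.3                # exactly the same initial condition as x_a
for _ in range(60):
    x_c = logistic(x_c)
# Identical initial conditions reproduce the path exactly.
```

Each iteration roughly doubles the separation on average, so the initial gap of one part in ten to the twelfth is amplified to order one well within the sixty steps, mirroring the billiard balls scattering around the table.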
There's nothing random in the physics of this system. Another example of a deterministic system that produces unpredictable results is the bean machine, or Galton board, where balls fall through an array of round bumpers and into a set of bins. As the bins fill up, the probabilities of the possible final states become apparent. Obviously, each ball will end up in a single bin, but we can't know in advance which bin. We can only say that there is a certain probability of landing in the center bin, and certain smaller probabilities of landing in bins further from the center. So, if asked to predict where a given ball will end up, the best we can do is to refer to this probability distribution. The behavior of photons is suggestive of balls falling through the Galton board. Each photon ends up at a particular place on the output screen, but we can only talk about probabilities in advance of the actual events. Now, maybe God doesn't play dice, and photons are particles that follow deterministic, yet chaotic, paths, as in the Galton board. It's just that the "pins" are hidden from us. We'll come back to this question in a future video. For now, we'll just note that, for whatever ultimate reason, without a new theory many of our physical predictions from quantum theory will necessarily have to be in terms of probabilities. We might still argue that photons could be made to follow well-defined paths. If the wave intensity gives the probability of finding a photon somewhere, let's just make that intensity, hence probability, zero everywhere except along some well-defined path. For instance, in a laser beam, photon positions are constrained in two dimensions, and ideally there are no photons outside the beam diameter; and if we rapidly turn the laser on and off, we create pulses that limit position in the third dimension. Then by following a pulse, we can map out the path of one or more photons.
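The Galton board statistics described above can be sketched with a short simulation; the number of pin rows and balls below are illustrative choices, not figures from the video. Each ball deflects left or right at each pin, and the bin counts build up the familiar bell-shaped probability distribution.

```python
import random

# Galton board sketch: each ball passes ROWS pins, deflecting right
# with probability 1/2 at each; its bin is the number of rightward
# bounces. (ROWS and BALLS are illustrative assumptions.)
random.seed(1)
ROWS, BALLS = 12, 10_000
bins = [0] * (ROWS + 1)
for _ in range(BALLS):
    bins[sum(random.random() < 0.5 for _ in range(ROWS))] += 1
# The center bin fills fastest; the edge bins stay nearly empty,
# even though each individual ball's path is unpredictable.
```

As in the video, no single ball's bin can be predicted, yet the histogram over many balls converges to a definite probability distribution (here, the binomial distribution).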
Let's think about this in some detail, because it will bring us face to face with a fundamental principle of quantum physics. We start by visualizing a laser beam in which photons are confined to a cylindrical region. Then we switch the laser rapidly on and off to generate a short pulse. Any photons will necessarily be confined to this pulse. So it would seem that by making the pulse small enough and following it through space, we could define a precise photon path. Thinking about the laser beam as a wave, it will have peaks and valleys that in principle go on forever. We can think of a pulse as a short segment of this pattern, or equivalently as an infinite wave in which we've squeezed the peaks and valleys flat except in a given region. The concept of a pulse or a squeezed wave comes up in many areas of physics and engineering, including speech analysis, wireless networks, and the flow of heat. It was in this last application that a fellow named Joseph Fourier discovered a remarkable fact. Any complicated function, including our pulses, can be built up as a sum of pure frequencies, pure sine waves. Here's a set of sine waves, each one with one more wiggle than the last. According to Fourier, we should be able to build an arbitrary pulse from combinations of these. In the following figures, we show three curves, blue, red, and green. The blue curve is the pulse we want to end up with. The green curve is a single sine wave. Its frequency, n, the number of bumps it has across the figure, is shown at the top. The red curve is the sum of all the green curves shown so far, and it shows our progress toward building the blue curve. We start with a relatively wide pulse. Now, using Fourier's theory, we start adding sine waves together to build towards this blue pulse. Frequency zero, nothing, 1, no, 2, no, 3, no, 4, no, 5, no, 6, no, 7, no, 8, something, 9, yes, 10, yes, 11, yes, 12, and we've built our pulse. We don't need any other sine waves.
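The build-up of the blue pulse from sine-wave components can be sketched numerically. The pulse position, width, and term counts below are illustrative assumptions, not the video's exact figures; the point is only that adding more Fourier components brings the running sum (the "red curve") closer to the target pulse (the "blue curve").

```python
import numpy as np

# Partial Fourier sums converging toward a rectangular pulse, in the
# spirit of the video's blue/red/green curves. Pulse placement and
# term counts are illustrative choices.
N = 2048
x = np.linspace(0.0, 1.0, N, endpoint=False)
target = ((x > 0.35) & (x < 0.65)).astype(float)  # the "blue" pulse

def partial_sum(n_terms):
    """Sum of the first n_terms Fourier components of the pulse."""
    total = np.full_like(x, target.mean())  # the frequency-zero term
    for n in range(1, n_terms + 1):
        a = 2.0 * np.mean(target * np.cos(2 * np.pi * n * x))
        b = 2.0 * np.mean(target * np.sin(2 * np.pi * n * x))
        total += a * np.cos(2 * np.pi * n * x) + b * np.sin(2 * np.pi * n * x)
    return total

# Mean squared error of the "red curve" shrinks as components are added.
err_few = np.mean((partial_sum(3) - target) ** 2)
err_many = np.mean((partial_sum(30) - target) ** 2)
```

Each added sine wave is orthogonal to the ones before it, so every new component can only reduce the remaining mismatch, just as each "yes" in the narration visibly improves the red curve.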
Let's look at the pulse and all the sine wave components at once. We can see that it only took a few of these components to build our pulse. Now we'll make our pulse half as wide, and again use Fourier's theory to build this out of pure frequencies. Zero, nothing, 1, no, 2, no, 3, no, 4, no, 5, no, 6, something, 7, yes, 8, yes, 9, yes, 10, yes, 11, yes, 12, yes, 13, yes, 14, and we've got our pulse. Looking at all the components in this case, it's clear that we've needed more sine waves to build up this shorter pulse. Let's make a pulse half as wide again. That's one fourth the width of our original pulse. Zero, nothing, 1, nothing, 2, something, 3, yes, 4, yes, 5, yes, 6, yes, 7, yes, 8, yes, 9, yes, 10, yes, 11, yes, 12, yes, 13, yes, 14, yes, 15, yes, 16, yes, 17, yes, 18, and we've got our pulse. It clearly took an even wider range of sine waves to build up this narrower pulse. The wide pulse has only a small range of frequency components. A pulse half as wide needs about twice as many frequencies. And a pulse one fourth as wide needs about four times as many frequencies. In general, a narrow pulse requires a wide range of frequencies, while a wide pulse requires only a narrow range of frequencies. We can actually hear this principle in action. If we play this relatively wide audio pulse repeatedly, we hear only a low rumbling. If we narrow the pulse, we hear higher-pitched tones. And with an even narrower pulse, we hear even higher frequencies. Fourier's theory shows us that a pulse of width delta t contains a range of frequencies delta nu, such that delta nu is inversely related to delta t. Equivalently, delta nu times delta t is approximately equal to 1. This fundamental mathematical time-frequency ambiguity applies to any phenomenon involving waves. It could be music, bird chirps, radio transmissions, ultrasound images, and on and on and on. To see the consequences for light and photons, we start with Planck's quantum relation for photon energy.
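The inverse relation between pulse width and frequency range can be checked numerically. This sketch uses Gaussian pulses (an assumption for convenience; the video's pulses are not exactly Gaussian) and compares the rms spread of their Fourier components: halving the pulse width should roughly double the frequency spread.

```python
import numpy as np

# Time-frequency ambiguity sketch: a pulse half as wide needs about
# twice the range of frequencies. Gaussian pulse shapes and the widths
# below are illustrative assumptions.
t = np.linspace(-0.5, 0.5, 4096, endpoint=False)

def freq_spread(width):
    """RMS spread of the Fourier components of a Gaussian pulse."""
    pulse = np.exp(-0.5 * (t / width) ** 2)
    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    power = spectrum**2 / np.sum(spectrum**2)
    return np.sqrt(np.sum(power * freqs**2))

wide, narrow = freq_spread(0.05), freq_spread(0.025)
ratio = narrow / wide   # close to 2: half the width, twice the frequencies
```

The product of pulse width and frequency spread stays roughly constant, which is exactly the statement delta nu times delta t is approximately 1.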
Energy E equals Planck's constant h times frequency nu. The expression for photon momentum is p equals energy over the speed of light c. Additionally, a wave traveling at the speed of light c for a time t covers a distance x equals c times t. Now multiply both sides of the time-frequency ambiguity formula by Planck's constant. The result is one form of the so-called uncertainty principle. Planck's constant times a range of frequencies delta nu corresponds to a range of energies delta E. If this represents the uncertainty with which we know a photon's energy, and delta t the uncertainty with which we know, say, the time the photon is emitted from a laser, then the product of these uncertainties can never be smaller than Planck's constant. Now divide and multiply by the speed of light. This gives us the second form of the uncertainty principle. The uncertainty in energy over the speed of light is delta P, the uncertainty in momentum. The speed of light times the uncertainty in time is delta X, the uncertainty in position. And delta P times delta X can never be less than Planck's constant. The reason the equal signs are replaced by greater-than-or-equal-to signs is that while our knowledge can never be more precise than these limits, it can be less precise. The uncertainty principle represents the ultimate limits of what it is possible for us to know about a photon, regardless of any technology we may develop. The uncertainty principle tells us why it is impossible to speak of a photon as following a definite path. If at a given time we know the photon's location to within uncertainties delta X in the horizontal direction and delta Y in the vertical direction, then the photon's momentum will have uncertainties delta PX and delta PY in those directions. That would mean that the photon could be moving directly to the right, or it could be moving with an upward component, or with a downward component.
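The chain of substitutions just described can be written out compactly as a sketch (using the transcript's order-of-magnitude form of the uncertainty principle, with Planck's constant h rather than the reduced constant):

```latex
\Delta\nu \,\Delta t \approx 1
\;\;\xrightarrow{\ \times\, h\ }\;\;
\underbrace{h\,\Delta\nu}_{\Delta E}\,\Delta t \;\geq\; h
\;\;\xrightarrow{\ \div\, c,\ \times\, c\ }\;\;
\underbrace{\frac{\Delta E}{c}}_{\Delta p}\,
\underbrace{c\,\Delta t}_{\Delta x} \;\geq\; h
```

The first arrow uses Planck's relation E = h nu, and the second uses p = E/c and x = c t, exactly the three relations introduced above.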
So the more we constrain a photon's location, the less constrained is its momentum, hence its direction of travel. It's impossible to say a photon is at a definite position traveling with a definite momentum. Indeed, the uncertainty in momentum becomes as large as the momentum itself when photon position is constrained to the scale of a wavelength. The uncertainty principle goes hand in hand with the phenomenon of diffraction we looked at in a previous video. A laser beam spreads very little because the width of the beam is actually quite large, much, much larger than a wavelength. The more we try to constrain photon position by passing light through a tiny pinhole, the more uncertain is the photon's momentum, and this shows up in the spreading of the light beam into a diffraction pattern.
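This last point can be put in numbers. A hedged order-of-magnitude sketch follows, using the transcript's form of the uncertainty principle; the 633-nanometer wavelength (a common helium-neon laser line) and 10-micron pinhole are illustrative values, not figures from the video.

```python
# Order-of-magnitude diffraction estimate from the uncertainty principle:
# squeezing photon position to delta_x gives delta_p >= h / delta_x, so
# the beam spreads by an angle ~ delta_p / p = wavelength / delta_x.
# Wavelength and pinhole size are illustrative assumptions.
h = 6.626e-34            # Planck's constant, J*s
wavelength = 633e-9      # m (HeNe laser line, an illustrative choice)
delta_x = 10e-6          # m, pinhole diameter

p = h / wavelength       # photon momentum, from p = E/c = h*nu/c = h/lambda
delta_p = h / delta_x    # minimum transverse momentum uncertainty
spread_angle = delta_p / p   # radians; about 0.06 rad for these numbers
```

Note that Planck's constant cancels in the ratio: the spreading angle is simply the wavelength divided by the aperture size, which is the familiar diffraction result. A beam many thousands of wavelengths wide therefore spreads by a correspondingly tiny angle.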