We have been analyzing the problem where for Hamiltonian H hat zero, the Schrödinger equation is satisfied by stationary states with spatial part phi n and time dependence e to the minus i omega n t. But with the addition of a perturbation, an interaction Hamiltonian H hat interaction, the Schrödinger equation is not readily solvable. Our approach is to represent the solution as a superposition of the stationary states of the original unperturbed problem. The coefficients Cn are functions of time, which allows the system to dynamically evolve. We developed the infinite set of coupled differential equations satisfied by these coefficients: Cf dot of t equals minus i times the sum over n of Cn of t, times e to the i (omega f minus omega n) t, times the fn matrix element of the interaction Hamiltonian. This problem is too difficult to solve exactly, which motivates us to apply perturbation theory to develop approximate solutions. Before doing that, let's introduce perturbation theory by looking at a classical system of two unit masses, each attached to a spring with spring constant omega squared. The blue mass has position x1, the red mass position x2, and the masses are constrained to move only in the vertical direction. Each mass independently satisfies the harmonic oscillator differential equation, x double dot plus omega squared x equals zero. If we give the blue mass an initial momentum, it oscillates sinusoidally. Here the blue line represents the stretched spring attached to the blue mass. Since there is no interaction mechanism between the masses, the red mass remains at rest. This is what we will call the unperturbed problem. Our hope is that the readily obtainable solution to this simpler problem can be used as the starting point to develop a solution to the more difficult perturbed problem. 
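As a concrete illustration of these coupled coefficient equations, here is a minimal numerical sketch with the infinite set truncated to a toy two-level system. The frequencies and the constant interaction matrix element values are made up for illustration; the integrator is a standard fourth-order Runge-Kutta step:

```python
import cmath

# toy two-level system: made-up state frequencies and a constant,
# Hermitian interaction matrix element m[f][n] (the fn matrix element)
w = [1.0, 1.5]
m = [[0.0, 0.05], [0.05, 0.0]]

def rhs(c, t):
    """cf dot of t = -i * sum over n of cn(t) * exp(i (w_f - w_n) t) * m[f][n]"""
    return [-1j * sum(c[n] * cmath.exp(1j * (w[f] - w[n]) * t) * m[f][n]
                      for n in range(2)) for f in range(2)]

def rk4_step(c, t, dt):
    """One classical fourth-order Runge-Kutta step for the coefficient ODEs."""
    k1 = rhs(c, t)
    k2 = rhs([c[j] + 0.5 * dt * k1[j] for j in range(2)], t + 0.5 * dt)
    k3 = rhs([c[j] + 0.5 * dt * k2[j] for j in range(2)], t + 0.5 * dt)
    k4 = rhs([c[j] + dt * k3[j] for j in range(2)], t + dt)
    return [c[j] + dt / 6 * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j])
            for j in range(2)]

# start entirely in state 0 and evolve out to t = 20
c, t, dt = [1.0 + 0j, 0.0 + 0j], 0.0, 0.01
for _ in range(2000):
    c = rk4_step(c, t, dt)
    t += dt

total = abs(c[0]) ** 2 + abs(c[1]) ** 2   # total probability should stay 1
```

Because the truncated matrix element is Hermitian, the total probability is conserved, which is a useful sanity check on the integration.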
Now, let's add a perturbation: a spring between the masses with spring constant epsilon times omega squared, to create an interaction force proportional to the difference of the mass positions. We place these terms on the right hand side of the differential equations. This causes the two equations to become coupled, with both equations depending on both positions, and it becomes necessary to solve the entire system simultaneously. Although this particular system is simple enough to solve exactly, in cases with a larger number of unknowns and/or more complicated equations, the problem becomes intractable. The added spring, shown as a thin green line, has an epsilon value of 1%. Let's look at the exact solution for this case. We again give the blue mass an initial momentum while the red mass starts at rest. The added perturbing force only slightly modifies the large motion of the blue mass as it gradually transfers energy to the red mass. Switching between graphs of the unperturbed and perturbed solutions, we see that the change in the motion of the blue ball is relatively small. And although the red ball's amplitude is steadily increasing, over the time period shown its motion remains small compared to that of the blue ball. This motivates us to call the solution of the unperturbed problem our zeroth order solution, which we designate with a zero superscript. The initial conditions of our problem are accounted for by the initial conditions of this approximate solution. Then we compute corrections to the approximation. The equations for the first order correction, designated with a one superscript, are formed by substituting the known zeroth order solution into the right-hand side. This decouples the equations because now, in each equation, the only unknown is the corresponding component of the correction. Then to get the second order correction, we substitute the solved-for first order correction into the right-hand side, and so on for as many corrections as we desire. 
For each correction we use zero initial conditions. Finally, we form our approximate solution as the sum of the zeroth order solution and all computed corrections. In principle, by calculating enough corrections we can make this approximation arbitrarily close to the exact solution. In this way, perturbation theory replaces the single original problem, which requires simultaneous solution of all unknowns, with a series of simpler, tractable problems that can be solved one after another and combined until a desired level of accuracy is achieved. Another way to look at the process is to substitute these sums into the first equation above and group terms by correction order. If we take the first terms on both sides of the equation, we simply get a restatement of the original equation. Let's add a leading zero term to the right-hand side, which of course does not change the equation. Now grouping the first terms on both sides gives us the unperturbed problem, which we are assuming is directly solvable. The second terms give us the equations for the first order correction, where the right-hand side consists of already solved-for expressions, and so on. If every equation in this infinite set is solved, then the original equation will be satisfied exactly. In practice, this process is useful when higher order corrections decrease rapidly enough that a truncated sum gives an accurate approximation to the exact solution. For our two mass problem, here is a graph of the zeroth order solution and the first three corrections. The corrections rapidly decrease in amplitude, telling us that with only a few corrections we will obtain a very accurate approximation to the exact solution. Here's the exact solution. The thick cyan line is x1 of t and the thick magenta line is x2 of t. The thin blue and red lines show the zeroth order solution. Adding the first order correction, our approximate solution improves dramatically. Adding the second order correction, the agreement is even better. 
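The classical scheme just described is easy to try numerically. In this sketch the parameter values (epsilon = 0.01, omega = 1, a unit initial momentum, and an end time of 20) are illustrative choices; it integrates the zeroth order problem and three successive corrections with a simple Verlet scheme, then compares the summed approximation for the red mass against the exact normal-mode solution:

```python
import math

omega, eps = 1.0, 0.01   # unit masses; illustrative parameter choices
dt, steps = 1e-3, 20000  # integrate out to t = 20

def oscillate(force, x0, v0):
    """Verlet integration of x'' + omega^2 x = force(t) on the time grid."""
    x = [0.0] * (steps + 1)
    x[0] = x0
    a0 = -omega ** 2 * x0 + force[0]
    x[1] = x0 + v0 * dt + 0.5 * a0 * dt * dt   # Taylor-series first step
    for n in range(1, steps):
        a = -omega ** 2 * x[n] + force[n]
        x[n + 1] = 2 * x[n] - x[n - 1] + a * dt * dt
    return x

zero = [0.0] * (steps + 1)

# zeroth order: blue mass kicked with unit momentum, red mass at rest
x1 = [oscillate(zero, 0.0, 1.0)]
x2 = [oscillate(zero, 0.0, 0.0)]

# each correction: substitute the previous order into the coupling term,
# and solve with zero initial conditions
for k in range(3):
    f1 = [eps * omega ** 2 * (b - a) for a, b in zip(x1[k], x2[k])]
    f2 = [eps * omega ** 2 * (a - b) for a, b in zip(x1[k], x2[k])]
    x1.append(oscillate(f1, 0.0, 0.0))
    x2.append(oscillate(f2, 0.0, 0.0))

# exact solution via normal modes: x2(t) = (sin(w1 t)/w1 - sin(w2 t)/w2) / 2
w2 = omega * math.sqrt(1 + 2 * eps)
t_end = steps * dt
exact = 0.5 * (math.sin(omega * t_end) / omega - math.sin(w2 * t_end) / w2)
approx = sum(order[steps] for order in x2)   # zeroth order plus 3 corrections
error = abs(approx - exact)
```

This system has a closed-form answer because the sum and difference coordinates are independent oscillators with frequencies omega and omega times the square root of 1 plus 2 epsilon, which is what makes the end-to-end check possible.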
And adding the third order correction, our approximate solution is essentially indistinguishable from the exact solution. Now back to our quantum mechanical problem. Let's take as our zeroth order solution the ith stationary state of the unperturbed problem. This is described by ci of t equal to one and all the other coefficients equal to zero. We plug this into the right hand side of the above equation to get cf dot of t equals minus i, times e to the i omega fi t, times the fi matrix element. Here omega fi is omega f minus omega i. Here's that equation again. Assuming the fi matrix element does not depend on time, the solution corresponding to the initial value cf of 0 equals 0 is one minus e to the i omega fi t, over omega fi, times the fi matrix element. This is the first order correction to our solution. The magnitude squared of this coefficient represents the probability that the system is in state f after time t. With a little algebra it can be put in the form: magnitude squared of the fi matrix element, times four times the square of the sine of omega fi t over two, divided by omega fi squared. Let's plot the sine squared factor as a function of omega fi for various t values, starting with t equals zero. Note that in the graph labels, to reduce clutter, we drop the fi subscript and simply write omega. For small t the function is relatively wide with a low peak value. As time goes on, the peak grows and the function becomes narrower. Omega fi, the frequency difference between the final and initial states, is proportional to the energy difference of those states. Our plot tells us that over a small time interval, the perturbation can excite final states that differ from the initial state by a wide range of energies. This is a form of the uncertainty principle: delta E times delta t is greater than or equal to h-bar. For a large time interval, the final states are limited to a narrower range of frequencies, again consistent with the uncertainty principle. 
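To make the behavior of this factor concrete, here is a small pure-Python sketch (the t values are illustrative) of four times sine squared of omega t over two, divided by omega squared. The peak value at omega equals zero is t squared, and the total area under the curve is 2 pi t, which the sketch checks by numerical integration:

```python
import math

def transition_factor(w, t):
    """4 * sin^2(w t / 2) / w^2, the factor multiplying |M_fi|^2."""
    if w == 0.0:
        return t * t   # limiting value as w -> 0
    return 4.0 * math.sin(w * t / 2) ** 2 / w ** 2

# the peak grows like t^2, while the first zeros sit at w = +/- 2 pi / t,
# so the central lobe narrows as t increases
peaks = {t: transition_factor(0.0, t) for t in (5.0, 20.0)}

# numerically integrate over omega: the area approaches 2 pi t
ratios = {}
for t in (5.0, 20.0):
    dw, lim = 0.005, 300.0
    n = int(2 * lim / dw)
    area = sum(transition_factor(-lim + k * dw, t) for k in range(n)) * dw
    ratios[t] = area / (2 * math.pi * t)
```

The growing, narrowing peak with fixed area proportional to t is exactly the shape that turns into a Dirac delta function at large times.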
As t grows arbitrarily large, the final state energy must be arbitrarily close to the initial state energy, which is a statement of the conservation of energy over long time periods. The area under the blue curve is pi t over 2. Combined with our original factor of 4, this gives a factor of 2 pi t. So for sufficiently large t, the probability the system is in state f equals 2 pi t, times the magnitude squared of m fi, the fi matrix element, times the Dirac delta function of omega f minus omega i. The delta function is the limiting case of the blue curve for large t and unit area: a very narrow, highly peaked function centered at argument zero. Our results are summed up in Fermi's golden rule: if a quantum system with Hamiltonian H hat zero has stationary states phi n e to the minus i omega n t, then the Hamiltonian H hat zero plus H hat interaction will cause an initial state i to transition to a final state f at the rate of 2 pi times the magnitude squared of m fi times delta of omega f minus omega i, where delta of omega, the Dirac delta function, is zero for all non-zero omega values and infinite for omega equals zero, such that the area under the curve is one. For the first order analysis we performed, m fi equals H hat interaction operating on the state i, projected on the state f. So far we've calculated the first order perturbation correction. As with the masses-on-springs example, higher order calculations can provide more accurate approximate solutions. To get the first order correction, we substituted the zeroth order solution into the right hand side of our differential equation. To get the second order correction, we substitute the first order solution into the right hand side. This leads to products of two matrix elements: one from the initial to the nth state, and one from the nth state to the final state. We can think of this as describing a process where the initial state transitions to an intermediate state, and then that state transitions to the final state. 
After quite a bit of algebra, we find that Fermi's golden rule still holds, with m fi modified to be the matrix element between the initial and final states, plus the sum over intermediate states of the matrix element between the initial and intermediate state, times the matrix element between the intermediate and final state, divided by the energy difference between the initial and intermediate states. Here we use the summation variable capital I to emphasize the idea of intermediate states. Schematically, we can picture the stationary states of the unperturbed system arranged in a row. Then the first order term represents a direct transition from the initial to the final state. Each term in the second order correction represents a transition from the initial to an intermediate state, followed by a transition from the intermediate to the final state. If the matrix elements of the interaction Hamiltonian are small, then the second order terms will be much smaller than the first order term. Assuming the system spends a long time in the initial state followed by a long time in the final state, there is no uncertainty principle wiggle room for these energies, and the initial and final energies must be equal. However, intermediate states are occupied only temporarily, so the uncertainty principle allows those energies to differ from the initial and final energy. Second order transitions raise an interesting possibility: it seems that the initial state could transition to an intermediate state, then the intermediate state could transition back to the original state. Early quantum field theory researchers found that calculations involving this type of loop process sometimes produced infinite results. The difficulty is that there are situations, especially when the intermediate states involve photons, where the number of intermediate states is infinite. So even if the individual terms in the sum are small, summing an infinite number of them produces an infinite result. 
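A toy numerical sketch of this modified matrix element may help. All numbers below (four states, their frequencies, and the interaction matrix elements) are invented for illustration; the direct element between the initial and final states is set to zero, so the intermediate-state sum is what drives the transition:

```python
import math

# made-up unperturbed frequencies; state 0 (initial) and state 3 (final)
# have equal energy, as required for the transition at long times
w = [0.0, 0.7, 1.3, 0.0]

# made-up Hermitian interaction matrix elements M[a][b]
M = [[0.0, 0.1, 0.2, 0.0],
     [0.1, 0.0, 0.0, 0.1],
     [0.2, 0.0, 0.0, 0.2],
     [0.0, 0.1, 0.2, 0.0]]
i, f = 0, 3

first = M[f][i]   # direct (first order) transition: zero in this toy model

# second order: sum over intermediate states I of
#   M[f][I] * M[I][i] / (w[i] - w[I])
second = sum(M[f][I] * M[I][i] / (w[i] - w[I])
             for I in (1, 2))   # the two intermediate states

m_fi = first + second
rate = 2 * math.pi * abs(m_fi) ** 2   # golden-rule rate (delta factor aside)
```

Note that the intermediate frequencies differ from the initial and final frequencies without causing any trouble: they appear only in the energy denominators, which is the uncertainty-principle wiggle room described above.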
Taming these infinities was for many years a central challenge to the success of quantum field theory.