In this lecture, we will learn the following things. We'll learn about the connection between temperature and the constituents of a material body. We'll learn about the precise nature and cause of heat energy. And finally, we'll learn about the radiation of energy from a material body. Now matter is ultimately made from building blocks. For example, a liquid may be made of a large number of atoms or molecules. The atomic theory would not really be accepted as a reliable description of nature until about 1905. But once one adopts the atomic theory as the correct description of material bodies, one is then forced to conclude that the large-scale macroscopic properties of a material object are somehow connected to the microscopic behaviors of the building blocks from which that material is constructed. Now the number of building blocks in a material body in the human world, the macroscopic world, is vast. For example, there's the concept of the mole. One mole is the number of atoms in a 12 gram sample of carbon-12. Experimentally you can work it out, and you'll find that one mole's worth of things, anything at all, grains of sand, planets, stars, atoms, is given by a special number known as Avogadro's number. And that number is 6.02 times 10 to the 23 things per mole. One mole, therefore, is 6.02 times 10 to the 23 things. Heat energy must have a connection to the behavior of the building blocks of matter. After all, if one is depositing a form of energy into a material body, that energy must go somewhere. And we must look to the constituents of the material body to figure out where that energy might be going. This helps answer the questions: what is heat energy, and where exactly does it go or come from?
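The mole arithmetic above can be sketched in a couple of lines of Python; the sample size here is a made-up number purely for illustration.

```python
# Avogadro's number, as quoted above: things per mole, for anything at all.
N_A = 6.02e23

# Illustrative (assumed) example: how many atoms are in 2.5 moles of helium?
moles = 2.5
atoms = moles * N_A
print(atoms)  # 1.505e+24 atoms
```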
Let's look at an ideal gas as a laboratory for connecting macroscopic concepts, such as the volume of a material, the temperature of a material, and the pressure exerted by a material on its environment, to microscopic concepts like the position and velocity of an atom or molecule. Now we're going to focus on ideal gases. I'm going to start with a very simple simulation of an ideal gas. This simulation is provided by the PhET demonstration toolkit that's available on the web. And this is a simulator of an ideal gas system. To start, I'm going to put one heavy particle of a gas, just one atom or molecule of an ideal gas, into the system. Where do the properties of gases like pressure and temperature come from? Well, pressure is force per unit area, and so the pressure exerted by an ideal gas on its container, in this case the container is represented by this box outlined here, comes from the force of the collisions of the ideal gas particle with the walls of the container. So, for example, we've injected one massive gas particle into the system, and we see that it's bouncing around the inside of the container. It collides with the walls of the container, and because this is an idealized system, we treat it as having perfectly elastic collisions with walls that do not move. This forces the component of the particle's momentum perpendicular to the wall it strikes to reverse upon collision. So, for instance, the particle strikes the bottom wall and we see that its vertical component reverses. It strikes the right wall and its rightward component reverses to the left. We see also that because the collisions are perfectly elastic, the speed of the particle remains fixed even as its direction changes, and that the momentum components change independently in each direction.
A collision with a wall to the left or to the right does not change the velocity component that is vertical, that is, parallel to that wall. So the origin of the pressure of the gas is the force it exerts, due to its momentum change, on the walls of the container. I could now inject more particles into the system. So let's inject 50 gas particles and give them a moment to spread out in the container. We see that while they all come in together as a clump, because they didn't all have quite the same velocity, they start colliding not only with the walls of the container but with each other. Now an ideal gas has elastic collisions with the walls of the container and with itself. And we see that very quickly the gas particles spread out fairly uniformly throughout the container, and they continue to collide. Collisions exchange momentum between colliding particles. But on average, we can see here that the particles are all moving with about the same speed. Some are moving a little faster, some a little slower, but collisions level that out. Stare at this for a moment and you'll see that these gas particles all have some average speed, with a distribution of speeds spread around that average. Slow-moving particles can get struck and become fast-moving particles; fast-moving particles can get struck and become slow-moving particles. But on average, there's a pretty consistent typical speed; we don't see the particles getting much faster or much slower as a group. While an ideal gas is truly an idealization, there are many gases in nature that are nearly ideal. For instance, all of the noble gases, for example helium or argon, behave very much like ideal gases under many common conditions. Many other substances, under a range of conditions, also behave according to the ideal gas model.
Careful experimentation on systems that behave in this ideal manner has revealed an empirical law relating the macroscopic properties of a gas: the number of moles of gas constituents, given by the lowercase letter n; the volume of the gas, given by the capital letter V; the temperature of the gas, given by the capital letter T; and the pressure exerted by that gas on its containing volume, for instance the walls of the container that hold it, denoted by the capital letter P. This equation is known as the ideal gas law. Most students learn it in a chemistry course in either high school or college: pV equals nRT. The product of the pressure exerted by a gas and the volume of that gas is equal to the number of moles of that gas times a constant times the temperature of the gas. Here this constant is denoted capital R. It is known as the ideal gas constant, and its value is 8.314 joules per kelvin per mole. It is often said to be named in honor of the French chemist Henri Victor Regnault, hence the letter R. But since a gas is made from small constituents, albeit a very large number of them, can we connect the microscopic properties of those constituents, their positions in space and the changes in those positions with time, to this macroscopic statement about the aggregate behavior of the gas? To connect the microscopic to the macroscopic, let's begin by doing what physicists and chemists in the 1800s did and turn to classical physics. After all, Newton's laws of motion were the only things they knew to be reliable descriptions of nature. So why wouldn't you turn to the thing that had already been working for a couple of hundred years? Let's begin with the concept of mass in this ideal gas. Let's define the molar mass of a gas as capital M. This is simply the mass for every mole of the gas. It's given by adding up Avogadro's number of individual constituent masses, which we'll denote as little m.
So if each atom or molecule that makes up an ideal gas has an identical mass, little m, then the molar mass is that little m times Avogadro's number. That gives us the mass per mole of this gas. What about the volume of the gas? Well, to keep things simple, let's consider a nice cubical space containing our ideal gas. It has a fixed size, with sides all of length L. That means the area of any side of the cubical space, the box in which we're holding the gas, is given by capital A equals the square of the length of any side. And the volume, capital V, is the cube of the length of any side. Now pressure is a bit more difficult. Pressure is the sum total of the force, F total, per unit area, exerted by all gas constituents on the walls of the container at any moment in time. An individual gas molecule will occasionally collide with a wall of the volume containing it. That collision will briefly exert a force. That force on that area is the pressure. Of course, a gas is made from many constituents, and so it's the sum total of the average number of collisions per unit time that causes the pressure on the walls of a vessel. How might we describe this using concepts of motion, Newton's laws, and conservation laws, all from classical physics? Well, let's begin by thinking about a single constituent. Each constituent has a velocity vector at any moment in time with three components: an x-component, a y-component, and a z-component. Since we're considering an ideal gas, we're talking about elastic collisions between a constituent of mass m and, for instance, the wall of the container along the x-axis. Let's focus for now only on the component of the motion of a gas molecule along the x-axis. During these collisions, the wall doesn't move, and so its velocity before and after the collision is zero.
And if you consider a single collision along the x-axis between a gas molecule and the wall that it strikes, and if you conserve kinetic energy and momentum, as is true of an elastic collision, then you find that the initial momentum of the gas molecule is its mass times its original velocity in the x-direction. After the collision, conserving momentum and kinetic energy, you're forced to conclude that it has the same speed along the x-axis but has reversed the direction of its motion. So the final momentum just after the collision with the wall is negative m times its speed along the x-axis. A collision results in a change in momentum for the gas molecule, and a change in momentum is what is known as an impulse in introductory physics. The impulse is just the difference between the final momentum and the initial momentum. In this case, if you crunch the numbers, you find that if we knew the mass of a gas molecule or atom and its velocity along the x-axis just before the collision, the impulse that results from this change in momentum is negative 2 m vx. Now if we knew the time over which the impulse occurs, we could compute the force exerted by just this one constituent on the wall, and we can do that by relating impulse, time, and force using Newton's second law: the force is equal to the change in momentum divided by the change in time. What is the time between collisions in one dimension? Well, with a specific wall, it's just the time for the constituent to strike the wall, bounce back across the box along the x-axis, strike the opposite wall, and return to the first wall, the one on which we're considering the force.
The time between collisions will simply be twice the length of the box along the x-axis divided by the speed of that constituent along the x-axis, 2L over vx. Now the force of the gas constituent acting on the wall will be equal in magnitude but opposite in direction to the force that the wall exerts on the constituent. The pressure is due to the force that the gas exerts on the container; what we've computed is the force that the gas molecule experiences from the wall. We can use Newton's third law to relate what we have to what we want. We want the force exerted on the wall by this constituent, we have the force exerted on the constituent by the wall, and they're related by a minus sign. If we plug in the force the constituent experiences because of the wall, the minus signs cancel and we're left with 2 times m times vx divided by the quantity 2L over vx. Simplifying, we find that the average force exerted on the wall by this one gas molecule is m vx squared divided by L: the mass of the gas molecule times the square of its speed along the x-direction, divided by the length of the box along the x-axis. But that's just one gas molecule. Pressure is the sum of all such forces across all constituents in the ideal gas, divided by the area of the wall in question. So what we really want is the total force exerted by all gas-molecule collisions on the wall in a given time, divided by the area of the wall, which is just L squared. The total force will be given by adding up the forces exerted by individual gas molecules with their individual velocity components along the x-axis. There may be Avogadro's number worth of ideal gas constituents, and we have to consider each one over its own time window, delta t, given by 2L over its vx.
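The single-molecule bookkeeping above can be checked numerically. This is a sketch with assumed values for the mass, the x-velocity, and the box size, not numbers from the lecture's simulation:

```python
m = 6.64e-27    # kg, roughly the mass of one helium atom (assumed example)
v_x = 1.0e3     # m/s, assumed x-component of the molecule's velocity
L = 0.1         # m, side length of the cubical box (assumed)

impulse = 2 * m * v_x     # momentum delivered to the wall per collision
period = 2 * L / v_x      # round-trip time between hits on the same wall
force = impulse / period  # average force on the wall: m * v_x**2 / L
print(force)              # a minuscule force from this one molecule
```

Multiplied over something like Avogadro's number of molecules, these tiny individual forces add up to the macroscopic pressure.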
And we find that all we have to do is sum up m v1x squared over L cubed plus m v2x squared over L cubed plus m v3x squared over L cubed, all the way up to the total number of molecules that make up this gas. Notice that every term in this sum has a common multiplicative factor of m, the mass of the constituent, divided by L cubed, effectively the volume of the container. So we can pull that out in front of the sum, and then we just have to sum the squared x-components of velocity over all of the gas molecules. Well, the gas molecules are colliding with each other; we looked at this in the simulation. So they don't all have the exact same horizontal speed at any given time. But they do collide with each other, and on average they have the same speed over some unit of time. So we can approximate this sum by treating all of the gas molecules as having, on average, the same horizontal component of velocity, and the sum is then just the total number of molecules, capital N, times the average of vx squared, the average of the square of the x-component of their velocities. That average is a single number for all of the gas molecules, even if each of them has a slightly different horizontal component of speed, because they've been colliding with each other and with the walls. Simplifying one step further, we can replace big N, the total number of gas molecules, by Avogadro's number, N sub A, times the number of moles of the gas, little n. That number of moles appeared in the ideal gas equation, and that's why we're putting it in here. So the final equation we get is that the pressure exerted by the gas on the wall is, on average, the mass of each molecule or atom, divided by the volume of this cubic container, times the number of moles of the gas, times Avogadro's number, which tells you the number of things per mole, times the average of the square of the x-component of the velocity.
Well, let's see if we can relate that x-component to the total speed of each gas molecule on average. On average, the x-component of a constituent's squared speed will simply be one-third of its total squared speed. The squared speed of a single molecule, v squared, is given by a variation of the Pythagorean theorem as the sum of the squares of the components: vx squared plus vy squared plus vz squared. So on average, after any number of collisions, we would expect each of those squared components to be one-third of v squared. So we take our pressure equation, which is just rewritten here, and we plug in the fact that the average of vx squared is really just one-third of the average of the total speed squared. We finally arrive at a point where we can begin to relate microscopic properties, like the average squared speed of molecules and their masses, to the large-scale properties of the whole gas. For instance, multiplying this equation by the volume cancels the V in the denominator of the microscopic expression. We wind up with P times V equal to the mass of each constituent, times the number of moles of constituents, times Avogadro's number, times the average speed squared, divided by three. We can simplify further by remembering that we defined the molar mass, the mass per mole of the ideal gas, as the mass of each constituent times Avogadro's number. That replaces m and N sub A in the equation, and we wind up with the molar mass times the number of moles times the average of the speed squared of a molecule, divided by three. By the ideal gas law, PV, which is equal to this thing, is also equal to nRT. Notice that the number of moles of gas appears on both sides of this equation and cancels out. And we can finally solve for the average squared speed of a single molecule in an ideal gas by rearranging this equation to isolate the average.
And when we do that, we find that this microscopic property of an individual gas molecule, the square root of its average squared speed, also known as its root-mean-square or rms speed, is given by a combination of the macroscopic properties of the gas: the square root of three times the gas constant, which is just a number, times the temperature of the gas, divided by the molar mass of that gas. The microscopic has been connected to the macroscopic. We see here that classical physics can give you some insight into how the individual constituents of a material are related to the macroscopic properties of that material, properties that are easier to measure on the human scale. We can take one final step, and instead of looking at just the typical speed of an individual gas molecule, we can consider the average kinetic energy of any single constituent of the gas system. That's just equal to one half times the mass of a constituent times its average squared speed. That's the definition of the kinetic energy of a typical molecule in the gas. Now, from the ideal gas relationship between average squared speed, temperature, molar mass, and the gas constant, we learn the following: the average kinetic energy of a single constituent in the gas, which is given by one half m times the average of v squared, can instead be related to the macroscopic properties of the gas, one half times m times the quantity 3RT divided by the molar mass. This can be further simplified by replacing the molar mass with the mass per constituent times Avogadro's number, which is also just a constant. And we notice that the individual constituent masses vanish from this equation. We are left with the following: the average kinetic energy of a constituent of an ideal gas is given simply by a number, three halves, times another number, the gas constant divided by Avogadro's number, times a single variable, the temperature of that gas.
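As a sketch of this result, here is the rms speed, the square root of 3RT over M, evaluated for two common gases. The molar masses are standard values, and room temperature is an assumed choice:

```python
import math

R = 8.314   # J/(K*mol), the ideal gas constant
T = 300.0   # K, roughly room temperature (assumed)

for name, M in [("helium", 4.00e-3), ("nitrogen (N2)", 28.0e-3)]:
    v_rms = math.sqrt(3 * R * T / M)  # m/s, the rms speed
    print(f"{name}: {v_rms:.0f} m/s")
```

Helium atoms come out near 1.4 kilometers per second, nitrogen molecules near half a kilometer per second: ordinary room-temperature air is full of molecules moving faster than rifle bullets.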
Now, it turns out that R, the gas constant, divided by Avogadro's number is itself a fundamental constant of nature, known as Boltzmann's constant. It's written as a lowercase k with a subscript B. And so in the end, we find that the average kinetic energy of a single constituent of a gas, regardless of the masses of the constituents of that gas, is simply three halves times the Boltzmann constant times the temperature of that gas. This is a remarkable observation, a fantastic relationship: something as tiny as the kinetic energy of a typical molecule among a vast number of gas molecules is related to this single macroscopic property, temperature, which we can easily control in the macroscopic realm. Boltzmann's constant is 1.381 times 10 to the negative 23 joules per kelvin. It's a very tiny number, which makes sense, because the average kinetic energy of a single constituent out of a huge number of gas molecules ought to be a very tiny number, even at room temperature, for instance. When we measure the temperature of an ideal gas, what this tells us is that we are actually probing, in a very direct way, the average kinetic energy of its individual constituents. And this tells us what heat energy is. Heat energy is determined by this line of reasoning to be related to the average kinetic energy of the constituents of a material body. That is to say, as one adds heat energy to a system, one raises the average kinetic energy of the constituents. Adding heat, Q, raises the temperature, T, and this proportionally increases the average kinetic energy. Where is the heat going? The heat is going into the kinetic energy of the individual gas molecules. If you want to remove heat from a system, all you have to do is find a way to reduce the average kinetic energy of that system's constituents.
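A quick numerical check of the three-halves k_B T result, with room temperature as an assumed example:

```python
k_B = 1.381e-23   # J/K, Boltzmann's constant as quoted above
T = 300.0         # K, assumed room temperature

KE_avg = 1.5 * k_B * T   # average kinetic energy of one constituent
print(KE_avg)            # ~6.2e-21 joules: a very tiny number, as expected
```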
This also allows us to finally understand that a system with no kinetic energy, that is, constituents holding perfectly still, experiencing no collisions with the walls of their container or with each other because there's no motion at all, is identified as being at the lowest temperature you can ever have. Zero average kinetic energy for your constituents is zero kelvin. We finally have a physical understanding, at the most basic microscopic level of a large system, of what it means to achieve zero temperature. Zero temperature, a state of zero heat energy, is also a state of zero average kinetic energy for the constituents of that system. So this raises an interesting question: how do you transfer heat energy either to or from a system? Well, there are many ways to do this, and I'm going to focus on three quite broad, established mechanisms for transferring heat energy, because ultimately I only really want to focus on one of them. So let's consider cooling; heating will just be the reverse of anything I say here. Let's begin with the mechanism of conduction. Conduction is when you place a second system, perhaps at a lower temperature if we wish to cool the first system, in physical contact with the first system. Think of two cubes of metal at different temperatures. We want to cool one of those blocks of metal, so we take another system that's even cooler, and we press them together so that the two faces of the materials are physically touching each other. At that interface, at that contact between the two materials, collisions begin occurring between the atoms or molecules of one system and the atoms or molecules of the other. This creates an arena in which collisions transfer kinetic energy, on average, from one system to the other.
What you'll find is that higher kinetic energy constituents typically lose some kinetic energy to the slower-moving constituents at the interface of the other system. Of course, at that interface those constituents will then start having more collisions with the things inside their own system, and that's how heat energy is transferred by conduction: it's all collisions. This decreases the temperature of the hot system and increases the temperature of the cold system until the two systems reach a new equilibrium, T1 equals T2. The temperature of the hotter system comes down, the temperature of the cooler system comes up, and you finally reach a point where they both have the same temperature and stop transferring heat energy. On average, their constituents have the same kinetic energy, so no more net transfer can occur. Then there is convection. In convection, you pass a fluid, a gas or a liquid, across or around another system. So if we want to cool a system, we might blow air over it or push water across it in some kind of current. Collisions at the boundary between the constituents of your system and the constituents of the fluid will, on average, transfer kinetic energy to the fluid. If the fluid is cooler, its constituents have lower kinetic energy, and collisions will tend to favor increasing the kinetic energy of the cooler fluid's constituents. This ultimately cools your target system, system 1, by lowering its average kinetic energy. Finally, there is radiation. Radiation is a process by which constituents lose energy by giving it up in the form of radiated light. For instance, you might be familiar with the fact that you can stretch your hand out several centimeters, inches, maybe even up to a few feet away from a hot cooking pan on the stove.
And even though you are not making physical contact with the pan, and even though the air in the room around you is very still, you feel something being transferred to your hand. You would say that you can feel from a distance that the pan is hot. Well, that's because it's radiating, typically in the infrared. And that infrared radiation, which you can't see with your eye but can feel with your skin, will be absorbed by your skin. Radiation requires no physical contact between a system and its environment. In fact, if you took all the air out of the room and stuck your hand out in that environment, you would nonetheless feel heat being transferred to your hand by radiation. Electromagnetic radiation requires no medium to travel, and so even evacuating the room of air will still lead to a cooling of the pan, in this case by the radiation of infrared light. Radiation has the effect of carrying kinetic energy away from a system and giving it to the environment at large, even without physical contact. Radiation is what I'm going to focus on for the rest of this lecture. It's an interesting phenomenon because it is an interface between mechanics and electromagnetism. And you can already begin to see that, since we got ourselves into trouble before at the boundary between the laws of mechanics and the laws of electromagnetism, heat energy and radiation is another such interface of the classical mechanical view of the universe with the electromagnetic laws of nature, and inconsistencies may arise if you overly trust the mechanical laws of nature. There is a mathematical relationship, determined by experiment in the late 1800s and early 1900s, between the energy that is emitted or absorbed by a heated material body and the temperature of that body.
This was determined empirically by Josef Stefan to be the following: the power radiated or absorbed by a body, that is to say, the change in heat energy per unit change in time, is given by the product of four numbers. Sigma, which is a constant of nature known as the Stefan-Boltzmann constant, whose value is 5.670 times 10 to the minus 8 watts per meter squared per kelvin to the fourth. It's not a bad number to remember because it's got 5, 6, 7, 8 in it; I find that handy for remembering this number in a pinch. The Stefan-Boltzmann constant is multiplied by another number, the lowercase Greek epsilon. Epsilon is the emissivity of the surface of a body, and it ranges between zero, no emission, and one, perfect emission. You can see that a body with zero emissivity will emit no power in the form of radiation, because the right side of this equation will always be zero. On the other hand, a body with perfect emission will maximally emit radiation, given by the product of the other numbers: the Stefan-Boltzmann constant, the surface area A of the body, and the temperature of the body raised to the fourth power. Note that all material bodies above zero kelvin radiate energy in the form of electromagnetic radiation. You and I, sitting here right now at about 98.6 degrees Fahrenheit, the typical human body temperature, are radiating light away from our bodies. We just can't see it, and we can play around and figure out what wavelength it is as an exercise in class. A perfect emitter, with emissivity of one, is also known as a black body. It's a very special kind of object. It is a system that absorbs all incident radiation, and it can subsequently re-emit its own radiation with perfect emissivity.
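The Stefan-Boltzmann law, P equals sigma times epsilon times A times T to the fourth, can be applied to the radiating-human-body example above. The surface area and emissivity here are assumed round numbers, not values from the lecture:

```python
sigma = 5.670e-8   # W/(m^2 K^4), the Stefan-Boltzmann constant
eps = 0.97         # assumed emissivity of human skin (close to a black body)
A = 1.8            # m^2, assumed surface area of an adult body
T = 310.0          # K, about 98.6 degrees Fahrenheit

P = sigma * eps * A * T**4   # power radiated, in watts
print(f"{P:.0f} W")
```

This comes out to several hundred watts of emitted radiation. The net loss is far smaller, because the body also absorbs radiation from surroundings at nearly the same temperature.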
Black bodies are a special laboratory for testing the interface between the laws of mechanics, the movement of the constituents and the laws that describe the allowed states of motion of a material at its smallest level, and electromagnetic radiation, the emission of light. Now, before I show you an example of how classical physics got it wrong when applied to the question of radiation, I want to define for you a very useful concept, and that is the power emitted per unit wavelength in a radiation situation. This is known as the spectral radiance. In a situation where an amount of energy, say delta Q, is radiated away by a body in some period of time, delta t, it is actually fairly typical to drill down into a question about the amount of energy within a certain range of wavelengths or frequencies of the emitted radiation. In other words, if I consider a range of the radiation with a minimum wavelength lambda and a maximum wavelength that's just a little bit higher, lambda plus delta lambda, where delta lambda could be a very tiny amount, how much energy per unit time is radiated at wavelengths in that range? This question is answered by a special kind of function known as the spectral radiance. It's denoted by various letters; I'm going to use the capital letter B, and I'm going to make it a function of lambda, the wavelength, explicitly, to emphasize the fact that it answers a question per unit wavelength. This is the energy radiated per unit time per unit wavelength. I could alternatively have written B in terms of the frequency f, because frequency and wavelength are related through the speed of light for electromagnetic radiation. But I'm going to use B as a function of lambda. If you want to know the power radiated around a specific wavelength, then you need to pick a small range around that wavelength and compute a product. You might choose a specific lambda.
And then, because this is defined over a small range from lambda to lambda plus delta lambda, you multiply the spectral radiance, which is a function of lambda, by the window over which you are trying to compute the power radiated, delta lambda, and that returns the power emitted around that wavelength. Now, that would be a discrete way of thinking about it. If you have a well-defined continuous function of lambda representing this spectral radiance B, then you can just integrate. You can use integral calculus over a range to get the answer you desire. So, for example, if I want to know how much power is emitted between two wavelengths, lambda 1 and lambda 2, I can simply take the product of B and d lambda and integrate that product from lambda 1 to lambda 2. And if B is a well-defined function, I can do the integral. It may not be pretty, but I can get an answer to the question: the power radiated in that range of wavelengths. Now, with that introduction in mind, let's take a look at a classical-physics attempt to predict the amount of energy emitted per unit time around a given lambda. This was worked out in the early 1900s and answers questions like: how much power is emitted in, say, the ultraviolet, in some window around 240 nanometers? How much power is emitted in the range of red light, in some window around, say, 740 nanometers? Answering that question in little steps through the electromagnetic spectrum gives you a picture of how power is distributed as a function of wavelength in the emitted radiation. Now, the classical version of this is known as the Rayleigh-Jeans law, and it's from 1905. And so, again, you have to start from the spectral radiance function, the power per unit wavelength.
That is this quantity here in the Rayleigh-Jeans law: 8 pi times A, the surface area of the object, times c, the speed of light, times the Boltzmann constant, times the temperature of the object, divided by lambda to the fourth. If you check the units of that fraction, you'll see that it is joules per second per meter, so power per unit wavelength. If you then want to know how much power is emitted in a small window around a target wavelength lambda, you multiply by the size of the window, and that answers the question of how much power is emitted in a window about the wavelength lambda. So, for example, this tells us that for, say, a spherical body with surface area A heated to a temperature T, the shorter the wavelength of radiation you consider being emitted from the body, the more power is emitted around that wavelength. If true, this would be a catastrophic feature of nature. So, for example, consider a small sphere of metal, or something like that. You make it out of a very good emitting material, with a surface area of just one square meter and an emissivity of one. If you heat that to 6,000 Kelvin (and, just for reference, a very modest small propane torch can easily heat something to 3,000 Kelvin), it would emit about 10 to the 16 watts, that is, joules per second, in dangerous ultraviolet radiation alone, for instance at a wavelength of 250 nanometers. That is easily lethal to a living organism. To give you a point of reference, you can easily buy a sanitizing wand on Amazon or from other online vendors. A sanitizing wand emits 4 watts of radiation power in the form of ultraviolet, specifically ultraviolet C, whose wavelength is short enough to kill bacteria. If it can kill bacteria, it can do significant damage to other kinds of living cells, including the cells of the human body. 
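To make both the discrete-window picture and the integral picture concrete, here is a minimal Python sketch that plugs the lecture's numbers (a surface area of one square meter, a temperature of 6,000 Kelvin) into the Rayleigh-Jeans expression as stated above. The 10-nanometer window and the 200-to-400-nanometer integration limits are my own illustrative choices, so the exact figures depend on them, but the enormous magnitudes are the point.

```python
import math

# Rayleigh-Jeans power per unit wavelength, as quoted in the lecture:
# B(lambda) = 8 * pi * A * c * k * T / lambda^4   (joules per second per meter)
A = 1.0          # surface area, m^2 (the lecture's example)
c = 2.998e8      # speed of light, m/s
k = 1.381e-23    # Boltzmann constant, J/K
T = 6000.0       # temperature, K (the lecture's example)

def B(lam):
    return 8 * math.pi * A * c * k * T / lam**4

# Discrete picture: power in a small window = B(lambda) * delta_lambda.
# Here, a 10 nm window around 250 nm (the window width is my own choice):
lam, dlam = 250e-9, 10e-9
P_window = B(lam) * dlam

# Continuous picture: integrate B from lambda_1 to lambda_2,
# with a simple midpoint Riemann sum standing in for the integral.
def power_between(l1, l2, n=100_000):
    h = (l2 - l1) / n
    return h * sum(B(l1 + (i + 0.5) * h) for i in range(n))

P_uv = power_between(200e-9, 400e-9)   # power in a broad ultraviolet band

print(f"B(250 nm)         = {B(lam):.2e} W per meter of wavelength")
print(f"P in 10 nm window = {P_window:.2e} W")
print(f"P from 200-400 nm = {P_uv:.2e} W")
```

Halving lambda multiplies B by sixteen, so the predicted power grows without bound as lambda goes to zero, which is exactly the catastrophic feature described above.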
You should never expose your body to UVC if you can avoid it, because it causes damage to DNA, and this can lead to the formation of cancers. 10 to the 16 watts of UVC would be extremely dangerous, if not lethal, and all from a small heated sphere at 6,000 Kelvin. Well, that seems ludicrous, and it is ludicrous. If you actually go and measure the amount of power emitted at a given wavelength, it does not shoot off to infinity as lambda goes to zero. This is just not what is observed in reality, and yet it is a by-product of classical thinking, the marriage of Newton's mechanics with electromagnetism. Let me show you a graph. I don't want you to worry too much about what the axes mean; I'm going to describe them in an oversimplified manner. The vertical axis tells you how much energy is emitted per unit time, per unit area, and per unit solid angle, that is, into some chunk of the total space around the body, for a given frequency of the radiation. The frequencies are on the horizontal axis. High frequency corresponds to short wavelength: ultraviolet radiation has a shorter wavelength, X-rays a very short wavelength, and so forth. On the other hand, long wavelengths are down here at low frequencies, so infrared and red light have very small frequencies and correspondingly very large wavelengths. The blue curve, which not only comports with reality but was predicted in a mathematical exercise by a physicist named Max Planck, is what nature should look like, and in fact is what nature does look like, if you heat a black body to 5,800 Kelvin and look at the so-called spectrum of emitted power as a function of frequency. The blue curve is what nature looks like. This yellow dotted curve is the prediction of the Rayleigh-Jeans law, and it comes nowhere near reality. 
It arguably does an okay job for the very lowest frequencies, the very longest wavelengths of radiation from a body; maybe a human body would be accurately described by the Rayleigh-Jeans law. But the sun, which has a surface temperature of about 5,800 Kelvin and also behaves like a black body, is nowhere near correctly described by the Rayleigh-Jeans law. Now, another physicist, named Wilhelm Wien, figured out his own version of this prediction in 1896, and that's the pink curve. You'll notice that Wien's law, as it's known, does a pretty good job of describing the radiation at the highest frequencies but an abysmal job at low frequencies. Planck's law, however, nails it. Max Planck's law, as he derived it in the early 1900s, was the cornerstone of the correct description of the radiation from heated matter. So you can see here again a place where there is a breakdown between classical thinking, motivated by the things we learn in introductory physics and the familiar macroscopic world, and the world of the very small, in this case the individual constituents of a heated body of matter. There is a breakdown here, and a breakdown is an opportunity to make sense of the correct laws of nature. Max Planck figured it out even where Wien and Rayleigh and Jeans could not. So, to review, in this lecture we have learned the following things. We've learned about the connection between temperature and the constituents of a material body. We've explored the precise nature and cause of heat energy: the fact that heat energy is related to the average kinetic energy of the constituents of a material, like an ideal gas, and that this is directly related to the temperature of the macroscopic body of that gas. We've considered ways of transferring energy to and from objects, and we've looked specifically at the emanation of electromagnetic energy in the form of light from a heated body. 
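The relationships among the three curves on the graph can be checked numerically. Here is a minimal Python sketch comparing the standard per-frequency forms of the Planck, Rayleigh-Jeans, and Wien radiances at 5,800 Kelvin; the two sample frequencies are my own illustrative choices, not values from the lecture.

```python
import math

h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
k = 1.381e-23    # Boltzmann constant, J/K
T = 5800.0       # temperature, K (roughly the sun's surface)

# Spectral radiance per unit frequency, in W / (m^2 sr Hz):
def planck(nu):
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def rayleigh_jeans(nu):
    return 2 * nu**2 * k * T / c**2

def wien(nu):
    return (2 * h * nu**3 / c**2) * math.exp(-h * nu / (k * T))

# Low frequency (100 GHz, long wavelength): Rayleigh-Jeans tracks Planck,
# sitting only a fraction of a percent above it.
nu_low = 1e11
rj_ratio = rayleigh_jeans(nu_low) / planck(nu_low)

# High frequency (ultraviolet): Wien tracks Planck almost exactly,
# while Rayleigh-Jeans overshoots by many orders of magnitude.
nu_high = 3e15
wien_ratio = wien(nu_high) / planck(nu_high)
rj_overshoot = rayleigh_jeans(nu_high) / planck(nu_high)
```

At the low frequency the Rayleigh-Jeans value is within about 0.05 percent of Planck's, while at the ultraviolet frequency it overshoots by roughly nine orders of magnitude; Wien's expression shows the mirror-image behavior, matching Planck at high frequency and failing at low frequency, which is what the pink and yellow curves on the graph display.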
We've looked at some of the laws that were either derived or determined to govern that kind of radiation of energy, and we've seen that where classical physics, Newton's mechanics combined with electromagnetism, is used to predict the radiation from a special kind of heated body, a black body, there is a total breakdown compared to reality. In the next phase of the course, we're going to take this breakdown as a launching point for a deeper understanding of nature. We're going to transition from the very fast to the very small and begin to explore the origins of quantum physics.