Good afternoon. My name is John Cardy, and it's a great pleasure to welcome you all here today. It's a great pleasure and indeed a great honor to be able to introduce Professor Leo Kadanoff. He is truly one of the giants of the field of statistical physics. In the work for which he is perhaps best known, in the late 1960s, he laid the conceptual basis for our modern understanding of what is known as scaling for systems close to a critical point of a phase transition, thus providing a unifying explanation for diverse sets of experimental data on magnets and fluids, as well as one of the key inputs for the subsequent development of the renormalization group approach in the early 1970s, work for which he shared the Wolf Foundation Prize in 1980. Although others went on to receive the Nobel Prize for this work, it is Kadanoff's simple yet profound insight which most students of modern statistical physics now learn when being introduced to the subject. He continued to make important contributions to that subject, but after about 1980 he began to focus on another kind of scaling, in fluid flows and other complex dynamical systems. Once again, he was at the forefront of a revolution in understanding these problems. I recall that it was in one of his talks that I first heard the term fractal used in a physics rather than a mathematics context. He also emphasized, and critically questioned, the role of large-scale computer simulations in understanding these difficult problems. Professor Kadanoff received his doctorate from Harvard University. After post-doctoral studies, he joined the University of Illinois, where, in addition to inventing Kadanoff scaling, as I have already mentioned, he co-wrote a seminal text on quantum statistical mechanics. In 1978, after a spell at Brown University working on problems of urban growth, he moved to the University of Chicago, where he was for a long time the John D. and Catherine T. 
MacArthur Distinguished Service Professor of Physics and Mathematics. He currently also holds a distinguished research chair at the Perimeter Institute. Professor Kadanoff is a member of the National Academy of Sciences and a fellow of the American Academy of Arts and Sciences. Among his many previous honors and awards, apart from the Wolf Prize, one can pick out the Buckley and Onsager Prizes of the American Physical Society, the Boltzmann Medal, the Grande Médaille d'Or of the French Academy of Sciences, the US National Medal of Science, the Onsager Medal, and the Lorentz Medal. Throughout his career, Leo has shown himself always open to new ideas, as well as not being bashful in airing his own opinions about the more philosophical and social aspects of science in general. I look forward to hearing some of these in his talk today, entitled Innovation and Achievement in Theoretical Physics. Leo Kadanoff. Thank you, John, for that very flattering introduction. Thank you all for coming out for this occasion, which is a wonderful occasion for me. I'm going to come back to this movie after a bit, but let's start at the beginning. I'm going to talk about innovations in statistical physics. The outline of the talk, which I will read so that you can hear and appreciate my original accent, is: in 1965 to 1971, a group of people named here, Cyril Domb, Michael Fisher, Benjamin Widom, Patashinskii, Pokrovsky, and Kenneth Wilson, myself included, formulated and perfected a new approach to physics problems, which eventually came to be known under the names of scaling, universality, and renormalization. This work formed the basis for a wide variety of theories, ranging from particle physics and relativity through condensed matter physics and into economics and biology. One of my daughters here present chided me on another occasion, not entirely different from this one, for being too modest. I should try to avoid that this time. 
This work, that is, the work on scaling, universality, and renormalization, was of transcendental beauty and of considerable intellectual importance. This left me with a personal problem. What next? Constructing the answer to that question would dominate the next 45 years of my professional life. The short answers are that I would try to help in finding and constructing new fields of science, I would work on the science and society borderline, and I would try to provide helpful and constructive criticism of scientific and technical work. But let me start with the accomplishments in the early period. In the period 1966 through 1968, I helped develop a new method of understanding physical problems. I'm going to spend the next 15 minutes telling you about this work, focusing upon my own contributions. First, where did I start out? I started out thinking about matter, the stuff around us: liquid, solid, gaseous matter. Here are three phases of water. Qualitatively different kinds of behavior are called phases of matter, as in the familiar behavior of water. You can stand on ice, but not on liquid water, and you can hardly feel or see water in its gaseous form as water vapor. I was trying to solve a particular problem involving what are called phase transitions, the changes from one of these phases to another, for example, the freezing or boiling of water. Let's focus on boiling. There's another picture of the phases of water: liquid, solid, gaseous. Boiling is very familiar to every housewife, cook, and chemist. Boiling is the change of a fluid from a high density state called a liquid to a low density state called a gas. This process is controlled by two physical parameters, temperature and pressure. At ordinary temperatures, liquid water has a very high density, and gaseous water a very low density, so boiling produces a considerable change in density. 
As one raises the temperature above room temperature, the jump in density between these two phases gets smaller and smaller, until at a very high temperature and a very high pressure, the jump goes to zero. At that point, the fluid does not know what phase it should have. Very large regions of both phases appear. At this point, we're at what is called the critical point, and that's what I'm going to be talking about for a while. Come back with me. My story begins in 1965, when I'm 28 years old and working at the Cavendish Lab at Cambridge University. I'm an ambitious young scientist. I know that boiling is reasonably well understood, but there's no workable understanding of critical points, nor of the critical points that occur in many other phase transitions. I try to build this understanding by doing what theoretical scientists have done since the time of Newton: I choose to study in detail a mathematical example. Here I looked at a particularly simple phase transition problem called the Ising model. The Ising model is very simple. It has regions of low density, here marked in green, and regions of high density, here marked in red. I think of this as a real fluid, and the jump between the high density and low density regions as a boiling transition. The forces within the fluid try to pull high density regions towards high density regions and low density regions towards low density regions. At low temperatures, these forces hold the two phases separate from one another and allow a boiling transition. The next slide starts out with the same picture and says roughly the same thing: the jump between the high density region and the low density region is called boiling. At higher temperatures, there's a random arrangement of the different regions and no phase transition. 
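The model described here can be written down compactly. In the standard magnetic notation (my notation, not the talk's), each lattice site carries a variable $s_i = \pm 1$, with $+1$ standing in for the high-density (red) regions and $-1$ for the low-density (green) ones, and the energy favors like neighbors:

```latex
E \;=\; -J \sum_{\langle i,j\rangle} s_i\, s_j ,
\qquad s_i = \pm 1, \quad J > 0,
```

where the sum runs over neighboring pairs of sites; the coupling $J > 0$ is the force that pulls high-density regions toward high-density regions and low-density regions toward low-density ones.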
However, at the critical point, you get a complicated melange in which there are mixtures of high density and low density regions of all different sizes: small regions and big regions and bigger regions yet. In fact, all possible sizes are represented. This is what I'm going to try to understand. As I work, I start out by doing the conventional thing, by using the preexisting theory of the Ising model due to Lars Onsager to calculate an aspect of the behavior of the Ising system, specifically how densities in different places are correlated with one another. I'm the first to do this calculation, so I have some knowledge that is mine alone. At this point, everything I have done is rather conventional. I know rather a lot about the Ising model from my work and the work of others, but all this knowledge is in disconnected pieces. Then comes one week in the year 1965. I notice that there are lots of students or student-age people in the audience, so I should explain: I worked for a year and a half doing what's in this paragraph; I worked for a week doing what's in that paragraph, but this paragraph is necessary for that one. So after all this hard work, in one week at Christmas time in 1965, I see how to bring all the pieces together. My insight: the next three slides will be about how I understood the problem. There's a picture, again, of the behavior in the region near the critical point. I constructed, in my mind's eye, a box. I'm going to think about this system as being divided into boxes, and I'm going to think that if a box is mostly green, like this one is mostly green, I'm going to color it green, and if it's mostly red, I'll color it red, and that's what I'm going to do on the next slide, if it works. So here's the thing, and this box has more green than red, and so I color it green. Hey, it worked, thank goodness. OK, I color it green. This box has more red than green. It gets colored red, and so on through the whole thing. 
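The box-coloring procedure just described, take a box of sites and give the whole box the color of its majority, is easy to sketch in code. Here is a minimal sketch (the box width b=3 is my illustrative choice; the talk does not fix one, and an odd width avoids ties):

```python
import numpy as np

def block_spin(grid, b=3):
    """Coarse-grain a square grid of +1/-1 spins by majority rule:
    each b-by-b box becomes one spin carrying the box's majority sign.
    (With odd b, a b*b box can never tie.)"""
    n = grid.shape[0] // b
    coarse = np.empty((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            box = grid[b * i:b * (i + 1), b * j:b * (j + 1)]
            coarse[i, j] = 1 if box.sum() > 0 else -1
    return coarse

# A random 9x9 configuration becomes a 3x3 grid of box spins.
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(9, 9))
blocked = block_spin(spins, b=3)
```

The universality hypothesis described next then amounts to asserting that `blocked` is statistically described by the same kind of model as `spins`, only at a larger length scale.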
It's like something one might do on an iPad or something like that, with the last piece coming in. So I spin it around, and we're all done. We have a description of the system in terms of boxes that are red and green, a different description from the one at the beginning. And now I make the hypothesis, which is now called the universality hypothesis, that very much the same description works for the boxes as would have worked in the original description of the Ising model. This hypothesis is based on the experimental observation that different critical phenomena are rather similar. I went further than these experimental observations, and I assumed that they're really not just similar, but the same, so that exactly the same description could work for the boxes or for the original system, which is described on a much smaller scale. In this form, the description that I was constructing is one of scaling. My assumption was that you could make a change in the scale of the system, and that nothing much would change as one changed the length scale from the smaller to the larger scale. So I started out with a small scale, which I call a. I moved to the large scale, which I'll call a prime, and a prime is proportional to a. There will be exactly nine equations in this talk, and here come five of them; four more will come in a moment. So the new length, this new length, is proportional to the old length, and the new number of pieces in the system is, for the geometry of two dimensions, proportional to the old number of pieces divided by the size of the individual piece squared. And then we have a new temperature and a new pressure. The assumption that we could describe the system by a new temperature difference from the critical point and a new pressure difference from the critical point, related to the old temperature difference and the old pressure difference, is the critical assumption for doing this. So there's the mathematics of what I was doing. 
We have a new number of boxes. We have a new temperature deviation from the critical point. We have a new pressure. These things are proportional with powers: here's the second power; here's some power, which is the y-th power of L; and here the z-th power of L. These things are proportional, with some powers of the change in length scale, to the old descriptions in terms of the number of boxes, the temperature deviation from criticality, and the pressure deviation from criticality. When these assumptions were phrased in mathematical form, they proved to be remarkably powerful. They supported the previous important work on this phase transition problem and suggested new results relating different experiments. This transformation provides a description of the effect of scale changes in the Ising model in terms of these powers, y and z. In fact, all the important properties of the phase transition can be expressed in terms of these two numbers. The approach that I had developed was what is called a phenomenological theory. That means that it's incomplete. It is incomplete because I could not see how to predict the values of y and z and could not suggest other detailed things that we wanted to know about the phase transition. The result was based upon two ideas. One was scaling. The other was universality, and universality, I remind you, is the idea that all of these phase transition problems, or large classes of these phase transition problems, are really the same. So I gave seminars about the work. As distinguished from other people who say that they had to struggle to get their stuff established, I found it very easy. Everyone seemed to love the work. Everyone encouraged me. However, somewhat incongruously, for six years nobody put the finishing touches on what had been done. Nobody knew how to calculate these indices called y and z. Nobody could go really beyond what I had done. Until, in 1971, Kenneth G. 
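Collecting the relations these slides describe, and using my own symbols for the quantities the talk names ($a$ for the length scale, $N$ for the number of pieces, $t$ and $p$ for the temperature and pressure deviations from the critical point, $L$ for the factor by which the length scale is changed, and primes for the rescaled description), the scaling assumptions read roughly:

```latex
\begin{aligned}
a' &= L\,a            && \text{(new length scale)}\\
N' &= N / L^{2}       && \text{(number of pieces, in two dimensions)}\\
t' &= L^{y}\, t       && \text{(temperature deviation from criticality)}\\
p' &= L^{z}\, p       && \text{(pressure deviation from criticality)}
\end{aligned}
```

All the important properties of the phase transition can then be expressed in terms of the two powers $y$ and $z$.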
Wilson, who had been a colleague of mine at Harvard University, showed how to marry this analysis with the earlier work of Murray Gell-Mann and Francis Low and thereby produce a complete theory. Wilson called the theory, using the name from the earlier group, the renormalization group theory. Given the fullness of time, and 1971 was a ways back, as you can appreciate, we can appreciate the power and impact of the ideas that came out of this work. I'm going to be talking about those ideas in a sec. New ideas. Scaling. In physics, we're most often concerned with connecting problems at different length scales. For example, we start out with atomic forces and then try to predict the properties of solids. We need some method for describing how to extrapolate effects over many orders of magnitude in length. How does what happens on this scale affect what happens on that scale? The word for this kind of extrapolation is scaling. As one traverses the different length scales, one only retains a few characteristics of the original problem. Not all of the details of the things that happened at the microscopic, at the small, level are retained. Basically, only the symmetries that existed at the microscopic level are maintained, but not any of the detailed behavior. So you end up with a situation in which the result depends mostly upon the symmetries of the original problem and hardly at all upon the nature of the forces at small scales. This situation in which there are many, many, many different large-scale behaviors that are produced, I'm sorry, let me go back. The situation in which there is one set of large-scale behaviors produced by many, many different small-scale behaviors goes under the word universality. People now talk about universality classes: classes of problems which have the same large-scale behavior. Problems fall into a few different universality classes depending on the nature of their solution. 
Scientists now use this phrase to describe many things beyond phase transitions. I do expect one of these days to hear universality classes used to describe the differences between different kinds of pop music. Last, but not least, in this list is renormalization. Wilson's renormalization method ties all of this together. He described the effect of many renormalizations, in which the length scale was changed again and again, as a motion of interactions towards a fixed point, a state of constant and unchanging interaction. However, there is one thing more important than all of the things on the previous slide. We have a new calculational paradigm. Previously, the way to do theoretical physics was to start with a problem, that is, a description of the interactions among the particles in the system. The job of the theorist was to calculate a "solution," in quotation marks, meaning a more detailed description of what happens in the problem. In the new era, one does something different. One does renormalization calculations in which one starts out with one set of interactions and constructs another equivalent problem on another length scale, as I did with the boxes. A partial but often sufficient description of the behavior of the whole thing is enclosed in the relation between the different scales. Let me give you an example. In the old days, one described the subject called quantum electrodynamics by giving the value of a parameter called alpha, which had the value one over 137. Now, one says that as the distance scale changes, for example as it gets smaller, alpha changes: alpha gets larger, and eventually the electromagnetic interaction can get very strong. In informal language, one says that the interaction strength runs. And this running of the interaction strength is, for example, the basis for the understanding people have of what is likely to happen at the Large Hadron Collider in Geneva. 
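The picture of renormalization as a flow of interactions toward a fixed point can be seen concretely in the simplest exactly solvable case, the one-dimensional Ising chain, where summing out every other spin gives the standard textbook recursion tanh K' = (tanh K)^2 for the nearest-neighbor coupling K. This is not the calculation discussed in the talk, just a small self-contained illustration of a coupling "running" under repeated changes of length scale:

```python
import math

def decimate(K):
    """One renormalization step for the 1D Ising chain: summing out
    every other spin maps the coupling K to K', where
    tanh K' = (tanh K)**2.  (Standard textbook decimation result.)"""
    return math.atanh(math.tanh(K) ** 2)

# Follow the flow of the coupling as the length scale doubles repeatedly.
K = 2.0
flow = [K]
for _ in range(8):
    K = decimate(K)
    flow.append(K)
# The interaction "runs": each step weakens it, and the flow heads
# toward the fixed point K* = 0, reflecting the fact that the 1D chain
# has no finite-temperature phase transition.
```

The relation between successive scales, here the single function `decimate`, carries the essential physics, which is exactly the shift of paradigm described above: one studies the map between problems rather than solving any one problem directly.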
The practice of physics has changed and moved away from the mode of calculation of Newton, Boltzmann, Einstein, and Dirac, going from solving problems to discussing the relationships among problems. Of course, this change is not all my doing. In addition to the aforementioned people, other people also pushed in this direction; all the workers in the field determine the direction of the subject. But it has happened. There has been a manifest, large change in the way we do this kind of science. But let's go back to my personal story. So here I am in 1971. Wilson has just put the beautiful finishing touches upon the theory that I partially developed. So what's my initial reaction? Well, I think many people in this room can guess what my initial reaction was, but let me wait for a second and then put it up. I'm disappointed and angry. How could someone else finish up the description of my beautiful work? That's... blah, blah. But that view, if held too hard and strong, would paralyze me and prevent any future work. I concentrated on the fact that Wilson had quite impressive skills and knowledge. He could apply his thought processes in ways inspired by computer programming. He knew about previous work in particle physics and dynamics of which I was ignorant. He was, I could acknowledge, a very great physicist. I was gradually able to put aside my childish chagrin and take pleasure in the fact that, with Wilson's additions, an edifice had been constructed which was transcendentally lovely and of first-rate importance for physics and for other kinds of intellectual processes. But, and here comes a big but: what next? It was evident to me that I had been wonderfully creative but also wonderfully lucky. It should have been, and perhaps was, evident that I would never do anything again that approached that level of creation. 
History suggests that theoretical physicists hit their top level of creation early in their lives and then never again approach the same pinnacle. I have a little table. Isaac Newton invented mechanics in the year 1667. Albert Einstein had a good year in 1905. P.A.M. Dirac invented the Dirac equation in 1928. Hans Bethe figured out how energy comes from the sun in 1936. Brian Josephson invented the Josephson effect in 1962. They were respectively 25, 26, 26, 30, and 22 years old. History suggests that theoretical physicists hit their top level of creation early in their lives and never again approach the same pinnacle. There are, I should be clear with you, exceptions.