Okay, so now Professor Spradlin from Brown University will start his lecture series on the physics and mathematics of scattering amplitudes. Thank you. Okay, so my lectures will be about scattering amplitudes, a subject that has received considerable interest in recent years. From my perspective, this interest derives from at least three different motivations. One of them has to do with math: the goal of understanding more deeply the mathematical structure underlying scattering amplitudes in quantum field theories, whether that means amplitudes in particular highly supersymmetric quantum field theories, or understanding and developing mathematical tools that can be applied generally to broad classes of quantum field theories. This ties into my second point, which is that a lot of the recent developments in scattering amplitudes, even though they may have initially been discovered in highly supersymmetric and non-physical theories, have definitely had significant impact on actual calculations, in the sense of providing new calculational tools and new ways of organizing calculations that are relevant for people working on collider physics analyses at experiments such as the LHC. So here I'll put usefulness for real physics. Oh, and there's one more thing I meant to say on the math topic. From my perspective, I spend a lot of time working on the particularly simple, maximally supersymmetric Yang-Mills theory, and there one kind of mathematical question one might want the answer to is: can one formulate a relatively simple set of mathematical criteria or mathematical properties to which N = 4 super Yang-Mills theory is the unique answer?
We know from a physics perspective that it's a rather unique theory, and we see that it has remarkable mathematical properties, so it would be interesting to understand whether there's also a natural mathematical question here, something that would appeal directly to mathematicians. The third motivation is not quite on the same footing as the other two, it's still a little bit less mature, but I have to say something about AdS/CFT, and I'll put a question mark here, because it's long been a hope that developments in scattering amplitudes for N = 4 super Yang-Mills theory will have some interesting tie-in to our understanding of holography. Now, there have been some interesting connections along these lines. People have used AdS/CFT to determine the strong coupling behavior of various scattering amplitudes, and many of the people who work on calculating perturbative amplitudes in super Yang-Mills theory hope that by calculating the first few terms in the perturbative series of some function, you might be gifted with some insight that would let you make an educated or inspired guess for the full non-perturbative completion. There certainly are examples of that happening in physics throughout the years. But it hasn't yet happened in the study of scattering amplitudes, except in completely trivial cases, because the functions we deal with are really highly non-trivial functions of kinematic variables. Anyway, all that is by way of very brief motivation and introduction to this subject. The purpose of my talks will be to introduce a number of the main ideas, concepts, and methods that play a role in the modern literature, so that hopefully, after you've attended my lectures, you can jump right in and open up some relatively recent amplitudes paper.
There's no particular reference for my set of lectures, but I'll be giving references along the way. Today, I'm going to start with some very basic ideas, following material that you'll find in a review by Lance Dixon, arXiv:1308.1697. But again, that's only today's lecture. In subsequent lectures, I'll develop more advanced material and give more references at the time. Okay, so let's start with something very simple: the Feynman rules for gauge theory. You can open up your favorite textbook on quantum field theory, go (usually) to the appendix, and look these up. In the standard quantization of gauge theories, there's a cubic coupling and a quartic coupling, but let's start even more basic than that. If you have an n-particle scattering amplitude, it's characterized or parametrized by a collection of data. I'm going to let the index i run from 1 to n; it denotes the n particles participating in my scattering process. The idea here is that each external particle is labeled by a collection of data consisting of a four-momentum p_i^mu and a polarization vector epsilon_i^mu. And let me specify now that I'll talk, at least for the moment, about pure gauge theory. That means I'm only talking about gluons. In this case, all the particles are in the adjoint representation of whatever gauge group you've chosen, so each external particle is also specified by some label a_i, which specifies the color of the gluon. To be specific, if we were working with gauge group SU(N), then each of these indices takes values between 1 and N^2 - 1 (note that this N belongs to the gauge group; it's not the number of particles n). Okay, so if you open your favorite textbook and turn to the chapter on computing gluon scattering amplitudes, the way the problem is set up is that to calculate an n-particle scattering amplitude, you should specify this much data, this much information, for each of the n particles.
So the first thing we're going to do in these lectures is immediately dispense with almost all of this information, at least to the extent that it is a highly redundant collection of information. We want to repackage this information in a more succinct way. So I'll just say here: this is highly redundant, and I'll remind you why. It's highly redundant, first of all, because each four-momentum has to be null, p_i^2 = 0 for all i: this is the on-shell condition for massless gluons. And then we also have redundancy due to gauge transformations. They tell us that the polarization vector for each particle needs to be orthogonal to the momentum, epsilon_i . p_i = 0, again for all i, and also that epsilon_i^mu is equivalent to epsilon_i^mu + alpha p_i^mu for any alpha. This is the residual gauge transformation. (Sorry, I didn't leave myself enough room to write it there; I'll write it up here: due to residual gauge transformations.) Okay, so turning back to my point here: the textbooks instruct you to provide this collection of information for each of your n particles, but it is highly redundant because of all of these constraints. So the first thing we're going to do is try to repackage this information, and the first piece we'll repackage is the color indices. Let me put step one: repackage color. Okay, so the idea here is rather simple. Let T^a be the generators of the gauge group. In our field, it's conventional to use a non-standard normalization that has the disadvantage of putting some square roots of two in places where you don't want them, but I'll just show you what we normally do. We'll normalize them so that Tr(T^a T^b) = delta^{ab}, the Kronecker delta. (In the more common convention there's a factor of 1/2 on the right-hand side.)
And they satisfy the commutation relations [T^a, T^b] = i sqrt(2) f^{abc} T^c (usually that sqrt(2) is not there), where the f^{abc} are the structure constants of the gauge group. So, turning back here: this index means that for each of your scattering particles, you are supposed to tell me which generator, which element of the Lie algebra in the adjoint representation, in other words which T^a, is the one corresponding to that particle. Okay, so what we're going to do is the following. I'm going to define ftilde^{abc} = i sqrt(2) f^{abc}, just to absorb the horrible factor of sqrt(2) that I've inserted there. Then we have ftilde^{abc} = Tr(T^a T^b T^c) - Tr(T^b T^a T^c). This is sort of the main formula, so I'm happy if it extends over two blackboards, because I'll be using it all the time. What we're going to do now is use this formula to replace f^{abc}, or equivalently ftilde^{abc} (since it's just an overall factor), in every Feynman diagram. Let me show you what I mean. If you look up the Feynman rules for gluons, as I mentioned before, there's a three-point coupling proportional to f^{abc}. (Of course, it also depends on the coupling constant; I don't need that right now.) There's also a four-point coupling with indices a, b, c, d. It has momentum dependence, but the dependence I'm interested in for the moment is the dependence on the gauge group indices: it has terms like f^{abe} f^{cde}, with, of course, an implicit summation over e, plus permutations. These permutations are dressed with momentum dependence that I'm suppressing; I'm just putting this note here to be honest. The point I'm interested in is these f^{abc}'s. Okay. So now, let's imagine you are calculating any Feynman diagram, like, for example, suppose you calculate this one. (Oops, that's too many legs. That's still too many. Got a little overwhelmed there.) Okay, this is a five-point amplitude.
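Since this trace formula is the main formula of the lecture, here is a small numerical sanity check of it (my addition, not from the lecture). I use SU(2), whose structure constants are f^{abc} = epsilon^{abc}, and I take the convention ftilde^{abc} = i sqrt(2) f^{abc}, which is one common way to absorb the factor of sqrt(2) so that the trace formula comes out exactly:

```python
import numpy as np

# SU(2) example: generators T^a = sigma^a / sqrt(2), so that Tr(T^a T^b) = delta^{ab},
# and structure constants f^{abc} = epsilon^{abc}.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
T = [s / np.sqrt(2) for s in (s1, s2, s3)]

# Totally antisymmetric epsilon^{abc}
f = np.zeros((3, 3, 3))
f[0, 1, 2] = f[1, 2, 0] = f[2, 0, 1] = 1
f[0, 2, 1] = f[2, 1, 0] = f[1, 0, 2] = -1

for a in range(3):
    for b in range(3):
        # the lecture's normalization Tr(T^a T^b) = delta^{ab}
        assert np.isclose(np.trace(T[a] @ T[b]), float(a == b))
        # commutator [T^a, T^b] = i sqrt(2) f^{abc} T^c
        comm = T[a] @ T[b] - T[b] @ T[a]
        rhs = 1j * np.sqrt(2) * sum(f[a, b, c] * T[c] for c in range(3))
        assert np.allclose(comm, rhs)
        for c in range(3):
            # the main formula: ftilde^{abc} = i sqrt(2) f^{abc}
            #                              = Tr(T^a T^b T^c) - Tr(T^b T^a T^c)
            lhs = np.trace(T[a] @ T[b] @ T[c]) - np.trace(T[b] @ T[a] @ T[c])
            assert np.isclose(lhs, 1j * np.sqrt(2) * f[a, b, c])
```

The same check goes through for any SU(N) with generators normalized as in the lecture, since the formula is just Tr([T^a, T^b] T^c) rewritten using cyclicity of the trace.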
Well, this isn't even an amplitude yet, because it's just a single Feynman diagram. Okay, suppose you calculate this five-point Feynman diagram. It will be cubic in the f^{abc}'s, because there's one structure constant f sitting here in this vertex, and there are two sitting there in that vertex. So what we're going to do is replace each one using the starred equation, the main formula above, to get a polynomial in traces: in this example a cubic polynomial, and in more complicated examples it could be an arbitrarily large polynomial in traces. I'm not going to work out this example explicitly, because the general point I'm going to make is extremely simple. The point is that every adjoint index appears either exactly once, for an external line, or exactly twice, for an internal line. Okay, so that's an obvious point. If I were to label the external lines a, b, c, d, e, then I need some adjoint label on my interior line, so let me use the next letter. Well, f is bad because you'd confuse it with a structure constant, and g is confusing because of the gauge coupling, so okay, r. But this r will be tied together with that one. So the point is, this thing is a cubic polynomial in the traces. It's of the form (trace minus trace) times (trace minus trace) times (trace minus trace). So when you multiply it out, you're going to get a polynomial with eight terms, each of which is a trace times a trace times a trace. Let's not write out all the indices, but consider some random term in that sum. You've got all these T's here: five of those T's will have indices corresponding to the external lines, but then there will be T's that are connected by an r index. For example, suppose you have something like T^r ... T^r, suppressing other indices for now. Okay. Then all we need to do in such a term is insert the completeness relation.
And it's at this point that I'm going to specialize. Everything I've said so far, except for one minor comment, was quite general for arbitrary gauge groups, but now I'm going to specialize to SU(N) because of something particularly nice that happens. So, specializing to SU(N), we use the completeness relation. And here, for once, I'll write out the sum explicitly; usually whenever you have a repeated index, of course, it's implicitly understood that there's a summation. I'll need to introduce some notation for the row and column indices of my generators. Recall that for SU(N), your generators are traceless Hermitian matrices. It's somewhat conventional to use Latin indices for the rows and Latin indices with a bar over them for the columns, to remind you that these are Hermitian matrices. All right, the completeness relation tells us that the sum over a of (T^a)_{i1}^{j1bar} (T^a)_{i2}^{j2bar} equals delta_{i1}^{j2bar} delta_{i2}^{j1bar}, minus 1/N times delta_{i1}^{j1bar} delta_{i2}^{j2bar}. The first term alone would be the end of the story if we were in U(N), but since we're in SU(N), we need to subtract off the trace part. So now, going back over here, this is what I was trying to get to: every time you see a contracted adjoint index in an expression like this, you're going to insert the completeness relation. So if there are any questions, you should let me know. (I'm sorry, it's hard to read all this. Yes, there should be another T, because there are 1, 2, 3, 4, 5, 6, 7, 8, 9 T's.) In general, I have in mind an arbitrarily long polynomial of these traces with many, many contracted indices, and we're going to insert the completeness relation for each one. So the quartic coupling has one repeated index built into it; there's a repeated index r, and then secretly inside my quartic coupling there's another repeated index, q or something. But the important point is that there's never a repeated index inside the same trace.
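As a quick numerical check of the completeness relation just written (my addition, not from the lecture), one can verify it for the simplest case N = 2, again with generators T^a = sigma^a / sqrt(2) in the lecture's normalization:

```python
import numpy as np

N = 2  # check the SU(N) completeness relation for SU(2)

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Generators normalized so that Tr(T^a T^b) = delta^{ab}
T = [s / np.sqrt(2) for s in (s1, s2, s3)]

# Left side: sum_a (T^a)_{i1 j1bar} (T^a)_{i2 j2bar}, stored as a rank-4 array
lhs = sum(np.einsum('ij,kl->ijkl', t, t) for t in T)

# Right side: delta_{i1}^{j2bar} delta_{i2}^{j1bar}
#           - (1/N) delta_{i1}^{j1bar} delta_{i2}^{j2bar}
d = np.eye(N)
rhs = np.einsum('il,kj->ijkl', d, d) - np.einsum('ij,kl->ijkl', d, d) / N

assert np.allclose(lhs, rhs)
```

The -1/N piece is exactly the subtraction of the trace part: for U(N) one would add the identity/N generator to the sum and the second term would cancel.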
It's always an index in one trace contracted against an index somewhere else, in a different trace. All right, so what does this relation buy you? Well, it's going to tie together the traces. Let's say you have a product of traces; here I want to be a little bit schematic. Hold on just a second, hold on till I finish writing this out, and then you'll see what I'm doing. I'm writing how the indices contract inside this. If you were to write out all of the row and column indices on these various matrices, what does it mean to take a product of matrices and then take their trace? Well, to take a product of matrices means you sum this column index against that row index, that column index against that row index, that column against that row, et cetera. And then to take the trace, you just sum the final column index with the initial row index. So that's what that means, quite literally. And over here, we have a similar story. (Oh, I see I violated my index convention compared to before. Well, it's too late now, I apologize. Oh, no, actually I'm okay; sorry, temporary confusion.) This index contracts with that, this index contracts with that, this index contracts with that, and that goes back there. Okay, so that's this piece. Now, what I'm going to do is insert the completeness relation between this T^a and this T^a. And in the first term, what does that do? It ties together the row index of the first T^a with the column index of the second T^a, and vice versa. So that's going to connect the traces. On the blackboard, I have the luxury of just erasing things; I'm afraid those of you taking notes will find this more difficult. But remember, I'm going to connect the row index on this T^a to the column index over here. There was a T^a here and a T^a here, so this one connects directly up here, and this one here connects over there. Oops.
Sorry, sorry. This one here is connected up there. Okay, so I've made a mess of things, but let me summarize. The first term here joins together two traces. Let me write a specific example, in case that was kind of a jumble. If you have two traces, and they have a common T^a in them that you're summing over, then the first term in the completeness relation lets you simply join the traces: you take what's inside one trace and insert it into the other. Plus order 1/N. The 1/N correction in this case is rather simple: it just means you delete this T^a and you delete that T^a, so you're left with a product of two traces. Continuing in this way, you can use the completeness relation on all contracted indices until you are left with a single trace, plus order 1/N terms, where all the remaining indices are associated to the external particles, the external labels. So just by systematically cutting open the traces and using the completeness relation, you can break everything down into single-trace terms, plus order 1/N. Now, when you do this, you can get all possible permutations of your original labels. Going back to the five-point example, which I've just erased: if you had the five external particles a, b, c, d, e, you could get all possible permutations in here. When you multiply the polynomial of T's out, you could in principle get Tr(T^a T^b T^c T^d T^e), you could get Tr(T^a T^b T^d T^c T^e), et cetera; you get all permutations. So in general, the final amplitude takes a form where the sum is over all cyclically distinct permutations of the external labels (I'll explain what that means in a moment). Okay, so this phrase "cyclically distinct" is really simple; it just means the following.
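The joining rule just described can also be stated as a single identity, sum over a of Tr(X T^a) Tr(T^a Y) = Tr(XY) - (1/N) Tr(X) Tr(Y), which follows directly from the completeness relation. Here is a quick numerical sketch for SU(2) (my addition; X and Y stand for the arbitrary strings of generators left inside the two traces):

```python
import numpy as np

N = 2
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
T = [s / np.sqrt(2) for s in (s1, s2, s3)]  # Tr(T^a T^b) = delta^{ab}

# X and Y play the role of whatever matrix products sit inside the two traces
rng = np.random.default_rng(0)
X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Y = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

# Two traces sharing one summed adjoint index
lhs = sum(np.trace(X @ t) * np.trace(t @ Y) for t in T)

# First term joins the traces; the -1/N term deletes the two T^a's,
# leaving the product of the two traces
rhs = np.trace(X @ Y) - np.trace(X) * np.trace(Y) / N

assert np.isclose(lhs, rhs)
```

Applying this identity repeatedly to every contracted adjoint index is exactly the systematic "cutting open" of traces described above.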
It's unnecessary to sum over all possible permutations in here, because the trace of a product of matrices is automatically invariant under cyclic transformations. It would be redundant to include both Tr(T^a T^b T^c T^d T^e) and Tr(T^b T^c T^d T^e T^a), because these are identically equal. Okay, so you only need to sum here over permutations that are inequivalent to each other, taking into account the cyclic symmetry. (Question: is capital A all the stuff that's left over, the momentum dependence?) Yes. Here I was only focusing on the f^{abc} structure of the Feynman diagram, but each Feynman diagram also comes with all of its propagators multiplied together. So this A includes all the rest of the amplitude: propagators, polarizations. I started this exercise by saying, let's look at the Feynman diagrams, but let's only focus for the moment on the f^{abc} part. If you were really calculating this Feynman diagram, you would have to write down all the f^{abc}'s, all the propagators, all the vertices. Then you would do this exercise on the f^{abc} factors, but you'd still be carrying around all that extra stuff, and that goes into what I've called A here. (Question: am I assuming N to be large?) No, N doesn't need to be large. Let me make that explicit: again, this is plus order 1/N, and in fact the order 1/N terms vanish at tree level. Once you go to loop level, you can have contributions that are the product of two traces; when you go to two loops, you can get products of three traces, et cetera. So this capital A (in fact, I've committed an error here; let me call it capital A, to distinguish it from the full amplitude, which I called script A) is usually called the color-ordered sub-amplitude or partial amplitude. So in this manner, at least for gauge group SU(N), you can very easily dispense with all of the gauge group information right from the beginning.
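In formulas, the decomposition just described reads as follows (written here for tree level, where the multi-trace 1/N terms are absent; g denotes the gauge coupling):

```latex
\mathcal{A}_n^{\text{tree}}\bigl(\{p_i,\epsilon_i,a_i\}\bigr)
  = g^{\,n-2} \sum_{\sigma \in S_n/\mathbb{Z}_n}
    \operatorname{Tr}\!\left(T^{a_{\sigma(1)}} T^{a_{\sigma(2)}} \cdots T^{a_{\sigma(n)}}\right)
    A\bigl(\sigma(1),\sigma(2),\ldots,\sigma(n)\bigr)
```

The quotient by Z_n implements "cyclically distinct": the sum runs over the (n-1)! orderings that are inequivalent under cyclic rotations, and A is the color-ordered partial amplitude.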
It's relatively easy to compute these directly using what we call color-stripped Feynman diagrams. In these Feynman rules, the three-point vertex for gluons with momenta p, q, k is i g_YM over sqrt(2) times eta^{mu nu} (p - q)^rho, plus permutations; there's that funny sqrt(2) that I was trying to avoid earlier. And if you have a four-point vertex, carrying momenta p, q, l, and k, the prefactor is i g_YM squared over (sqrt(2)) squared. So there's a sqrt(2) in the three-point vertex and a (sqrt(2)) squared in the four-point one. (Yes, hold on just a second; I need to put indices on here. The mu, nu, rho are the Lorentz indices that get contracted with the polarization vectors epsilon^mu, epsilon^nu, epsilon^rho. And no, there's no change in momentum conservation, if that was the question.) The point is just the following, and let me write here: similar for the four-point vertex. The point is that there's no more f^{abc}. It's just like the usual three-point vertex, except there's no more f^{abc}. And the point is that you use color-stripped, and here's the important word, planar Feynman diagrams. So to calculate the full amplitude, you in principle have to calculate all Feynman diagrams, but you find that it can be written as a sum over certain permutations of a more primitive object called the color-ordered amplitude. To calculate the color-ordered amplitude, you only need to sum over planar Feynman diagrams. So what you do in practice is specify a planar ordering of your external particles, 1, 2, 3, up to n, and then compute A(1, 2, ..., n). I call it planar because I literally mean that the Feynman diagrams you should include here are ones that can be drawn on a plane. So for example, yes?
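For reference, in one common convention (all momenta outgoing, couplings g and g^2 stripped off; signs and the placement of the coupling vary between references, so treat this as a sketch rather than the lecture's exact blackboard expression), the color-ordered three- and four-gluon vertices read:

```latex
V_3^{\mu\nu\rho}(p,q,k)
  = \frac{i}{\sqrt{2}}\left[
      \eta^{\mu\nu}(p-q)^{\rho}
    + \eta^{\nu\rho}(q-k)^{\mu}
    + \eta^{\rho\mu}(k-p)^{\nu}\right],
\qquad
V_4^{\mu\nu\rho\sigma}
  = \frac{i}{2}\left[
      2\,\eta^{\mu\rho}\eta^{\nu\sigma}
    - \eta^{\mu\nu}\eta^{\rho\sigma}
    - \eta^{\mu\sigma}\eta^{\nu\rho}\right]
```

These are the usual Yang-Mills vertices with the f^{abc} and f^{abe} f^{cde} structures stripped off, which is where the 1/sqrt(2) and 1/(sqrt(2))^2 mentioned above come from.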
Oh, no, no, I'm sorry. I'm not necessarily considering only tree level; this is a general decomposition. Yes, and I already made the comment that at tree level there are no 1/N corrections. But the really important point about these color-ordered sub-amplitudes is not necessarily that they're easier to compute. Because even though I've based my lecture so far on the crutch of Feynman diagrams, of course, we don't like to actually use Feynman diagrams to compute things. The important property of these capital A's is that they only have singularities in channels corresponding to a collection of cyclically adjacent momenta going on shell. And here's where I'm going to amplify the comment that was made: it's true that at tree level every diagram can be drawn in a planar way, but I mean very specifically planar with respect to a chosen ordering. Let me give you a very specific example of what I mean, and this example will also highlight the importance of the comment I've written on the board. For example, A(1,2,3,4,5) can have singularities at s_12 = 0, s_23 = 0 (I haven't defined s yet; just give me a second), s_34 = 0, s_45 = 0, s_51 = 0, where s_ij is the quantity (p_i + p_j) squared. For example, here's an example. The circle here means nothing; it just means that to compute the full amplitude you would sum over all Feynman diagrams with these given external legs. What I've drawn inside the circle is one particular example of a contributing Feynman diagram, one that exhibits a singularity at s_12 = 0. You see, because of this propagator right here: that propagator is literally 1/s_12. (Yes, well, when you're talking about tree-level amplitudes, of course, the singularities are poles.)
When you're talking about loop-level amplitudes, they could be branch cuts, so I intend "singularity" to mean both of those things. On the other hand, I want to contrast that with something like s_13. In contrast, A(1,2,3,4,5) can never have a singularity at s_13 = 0 (not s_13 squared; the square is built into the definition of s_ij), or at s_14 = 0, or s_25 = 0, et cetera, because you can't draw any planar Feynman diagram with this particular ordering that exhibits the propagator 1/s_13. Now, of course, if I did something like this, here's an example of a Feynman diagram that has a 1/s_13 pole from that propagator. And the gentleman who commented earlier was exactly correct that you can take this Feynman diagram and draw it in a planar way. But it's not planar with respect to my ordering: the ordering of my external particles has been violated. This one goes 1, 3, 2, 4, 5. So this Feynman diagram contributes to A(1,3,2,4,5), not to A(1,2,3,4,5). So once again, the full amplitude is given by a sum over all these permutations, but you only need to do one calculation. You only need to calculate this one piece, and then you know everything else, because everything else can be obtained by permutations. So you just fix the cyclic ordering of your external particles, compute all planar Feynman diagrams with that particular ordering, and sum them up. That gives this partial amplitude or sub-amplitude, which has the important property that its singularities always correspond to cyclically adjacent momenta going on shell. So this concludes my discussion of how to eliminate, or repackage, all of the color information. Henceforth, I will always be talking about color-ordered partial amplitudes. Are there any other questions at this point? Okay. So we have just enough time for me to explain how to trivialize the next piece of information, which is the kinematic information that specifies the momenta of the scattering particles.
So this is called step two: helicity management. We're going to let sigma^mu be the Pauli matrices, where sigma^0 is the 2x2 identity matrix and sigma^i for i = 1 to 3 are the usual Pauli matrices. Then if we have a four-vector p^mu, we can contract it with them: p_{a adot} = p_mu sigma^mu_{a adot}, where the indices a and adot run from 1 to 2. So the mu index here runs over the four different matrices (the 2x2 identity plus the three Pauli matrices), and a and adot are the row and column indices of those 2x2 matrices. So using the Pauli matrices plus the identity, you can trade back and forth between something with a vector index, like a four-momentum, and the same information expressed as a 2x2 matrix. The point is the following: the four-vector p satisfies p^2 = 0 with respect to the Minkowski metric if and only if the 2x2 matrix p_{a adot} satisfies det(p_{a adot}) = 0. And you can quickly see that that's true, because if you write p_{a adot} out in terms of components, it's the matrix with first row (p^0 + p^3, p^1 - i p^2) and second row (p^1 + i p^2, p^0 - p^3), and the determinant of this is exactly the Minkowski norm of p. Now, if you have a 2x2 matrix p with determinant zero, then it has rank at most one. In fact, it has rank exactly one, unless you're talking about the vector that's identically zero. What that means is that it can be written as an outer product of two two-component objects. How do you know that's true? If the determinant of this 2x2 matrix is zero, it must have a non-trivial kernel, and it must have a non-trivial cokernel.
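A small numerical illustration (my addition, not from the lecture) of the two statements just made: det(p_{a adot}) equals the Minkowski norm for any four-vector, and for a null momentum the matrix has rank one, so it factors as an outer product:

```python
import numpy as np

def to_matrix(p):
    """The 2x2 matrix p_{a adot} built from a four-vector p = (p0, p1, p2, p3),
    with the component layout written in the lecture."""
    p0, p1, p2, p3 = p
    return np.array([[p0 + p3, p1 - 1j * p2],
                     [p1 + 1j * p2, p0 - p3]])

def minkowski_sq(p):
    """p^2 with the mostly-minus metric (+,-,-,-)."""
    p0, p1, p2, p3 = p
    return p0**2 - p1**2 - p2**2 - p3**2

rng = np.random.default_rng(1)

# det(p_{a adot}) = p^2 for a generic (non-null) four-vector
q = rng.normal(size=4)
assert np.isclose(np.linalg.det(to_matrix(q)), minkowski_sq(q))

# A null momentum: p^0 = |p_vec|, so p^2 = 0 and hence det = 0
pvec = rng.normal(size=3)
p = np.array([np.linalg.norm(pvec), *pvec])
m = to_matrix(p)
assert np.isclose(np.linalg.det(m), 0.0)

# Rank one: m = lambda lambda^dagger.  For a real null momentum with p^0 > 0,
# one convenient choice is the first column of m, rescaled; then lambdatilde
# is just the complex conjugate of lambda.
lam = m[:, 0] / np.sqrt(m[0, 0])
assert np.allclose(np.outer(lam, lam.conj()), m)
```

The choice lam = m[:, 0] / sqrt(m[0, 0]) is one convenient extraction, valid when the (1,1) entry is nonzero; any rescaling lambda -> t lambda, lambdatilde -> lambdatilde / t works equally well, which is exactly the normalization freedom mentioned below.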
So you just let lambdatilde be any non-zero two-component vector that lives in the kernel, and lambda any non-zero two-component vector that lives in the cokernel, and then, up to an overall normalization, p_{a adot} = lambda_a lambdatilde_adot has to be true, and you can pull the overall normalization into your lambda and lambdatilde. In other words, if you look at a matrix of the form lambda_a lambdatilde_adot, its determinant is automatically zero. So what these variables do here, what I'm trying to say, is that this is the most general way to parameterize a 2x2 matrix whose determinant is zero. Or, going back to physics for a second, this is the most general way to parameterize a null four-vector in Minkowski space. So this is really neat, because what started off life as a quadratic constraint (and it's rather inconvenient to work with variables that satisfy a quadratic constraint) has been traded, by using the lambdas and the lambdatildes, for completely unconstrained variables. Now I need to introduce a little bit of notation with these lambdas, and then I think I'll have to call it quits. Remember, there will be a pair lambda, lambdatilde for each p in a scattering process, so they get indices i ranging from 1 to n: p_{i, a adot} = p_i^mu sigma_{mu, a adot} = lambda_{i a} lambdatilde_{i adot}. Okay, so all I'm doing is adding an additional index i to keep track of the particle labels. So then, finally, I'll end this lecture by summarizing the notation. We're going to use square brackets to denote products of the tilded variables and angle brackets for the untilded variables, and now I'm going to define an inner product. We define the angle bracket <i j> = epsilon_{ab} lambda_i^a lambda_j^b, and the square bracket [i j] = epsilon_{adot bdot} lambdatilde_i^adot lambdatilde_j^bdot.
And then, just by way of notation, I should work out s_ij, which means (p_i + p_j) squared. Since these are null momenta, that works out to 2 p_i . p_j. And if you work out what this is in terms of the spinor variables, it's <i j> times [i j]. And I think I'll conclude my first lecture there. We'll pick up next time, tomorrow. Thank you.
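Putting the pieces together, here is a numerical check (my addition) of the closing identity s_ij = <i j>[i j] for two null momenta, with the spinors extracted from the 2x2 matrices as above. (Sign conventions for the epsilon contractions differ between references; some write s_ij = <i j>[j i]. Here the conventions are chosen so the identity comes out with no extra sign, and for real momenta with positive energy the tilded spinor is just the complex conjugate.)

```python
import numpy as np

def to_matrix(p):
    """p_{a adot} as a 2x2 matrix, as in the lecture."""
    p0, p1, p2, p3 = p
    return np.array([[p0 + p3, p1 - 1j * p2],
                     [p1 + 1j * p2, p0 - p3]])

def null_momentum(rng):
    """A random null four-vector with positive energy: p^0 = |p_vec|."""
    v = rng.normal(size=3)
    return np.array([np.linalg.norm(v), *v])

def spinor(p):
    """lambda_a for a real null momentum with p^0 > 0; then lambdatilde = conj(lambda)."""
    m = to_matrix(p)
    return m[:, 0] / np.sqrt(m[0, 0])

rng = np.random.default_rng(2)
p_i, p_j = null_momentum(rng), null_momentum(rng)
lam_i, lam_j = spinor(p_i), spinor(p_j)

# The epsilon contraction is just a 2x2 determinant of the two spinors
angle = lam_i[0] * lam_j[1] - lam_i[1] * lam_j[0]   # <i j>
square = np.conj(angle)                             # [i j] for real momenta

# s_ij = (p_i + p_j)^2 = 2 p_i . p_j with the mostly-minus metric
s = p_i + p_j
s_ij = s[0]**2 - s[1]**2 - s[2]**2 - s[3]**2

assert np.isclose(angle * square, s_ij)
```

The identity is just det(m_i + m_j) = (p_i + p_j)^2 together with the fact that the determinant of a sum of two outer products is the squared determinant of the 2x2 matrix whose columns are the two lambdas.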