And so indeed, for the next couple of days I want to talk to you about a topic that's probably not quite familiar to most of you, but hopefully by the end of it you will see why it's interesting and how to tackle it. Primarily, I want to talk to you about the general quantum field theory formalism for analyzing situations that are out of equilibrium, generic time dependence in quantum field theories, and to develop the machinery to ask questions and probe how your quantum system evolves as it's driven out of equilibrium by external sources. You can find many applications of this in a wide variety of physical circumstances. Pretty much all of response theory, where you come into your quantum system, perturb it by some sources, and then ask how it evolves in the future, is part of this. Another natural place this appears is cosmology, where you are interested in the dynamical evolution of quantum fields: if you are looking just at, say, a fixed cosmological background, that is quantum field theory in curved spacetime; but if you're interested in questions where even gravity is dynamical, then it's really a question of dynamical quantum gravity in curved spacetime. But perhaps the context where I got most interested in this comes from understanding black hole dynamics and its connections in the context of holography. I won't touch upon any of these particular applications in any detail, except perhaps in the third lecture, where I will draw out some connections with AdS/CFT; but much of my motivation and interest in this subject comes from that third aspect. That said, this subject is reasonably old. It dates back five decades at least, if not more, and pretty much all that needs to be said in terms of quantum field theory formalism is contained in seminal papers written in the 50s and 60s. The thing that's not completely clear is the following question: given some microscopic rules where you know how to deal with time dependence.
I'll say "well understood," and I'll put this in quotes because this is not textbook material, though it should be. There's an open question, which is: how do you take these microscopic rules and convert them into a framework where you can do effective field theory? The reason this is trickier than it appears at first sight is simply the fact that in the effective field theory, we know on phenomenological grounds that there are a lot of interesting phenomena which appear at macroscopic scales that are not obviously there in the microscopic theory. The microscopic theory is unitary; it just has time-dependent sources, and that doesn't spoil unitarity. But the effective field theory exhibits features like dissipation, which at first sight seems to be in tension with unitarity: something seems to be getting lost. If you ask these questions in the context of cosmology or black holes, you can start wondering about unitarity, and about questions like whether there's some intrinsic decoherence in quantum systems because of some effective classicality. All of these questions should be well posed and should be derivable from these microscopic rules. The part that has not been made very precise, although the tools, as I will try to argue, are at our disposal, is fleshing out this arrow. In some sense, you can take the program I'm going to describe in the next couple of lectures as a very simple question. About 40 years or so ago, Ken Wilson told us how to think about effective field theories: you start with some microscopic theory, integrate out all the things you're not interested in because they don't contribute on macroscopic scales, and write down an effective field theory consistent with all symmetries, which is the theory at macroscopic scales.
If you play that same game in contexts where there's non-trivial time dependence and the effective field theory exhibits dissipation, et cetera, you find that naively writing down Wilsonian effective field theories is in tension with microscopic unitarity. In particular, one of the things I'll try to emphasize is that it's very easy, if you just blindly follow the Wilsonian rules, to get anti-dissipation, which you don't want. Friction means that things settle down; you cannot have runaway behavior at macroscopic timescales. The big question is: what are the rules of effective field theory that ensure that what we get in the low energy theory knows about the microscopics and gives you the correct phenomenology? I'm going to attack this question in a series of steps. I should say the story is not complete; we've understood lots of things, and the reason for the second half of the title is that it's a context where I think we have both evidence and hope for making the most progress. So the general structure of the lectures is going to be as follows. Today, in the first lecture, I'm just going to talk about the microscopics and set up the quantum field theory formalism. I'll convince you that if you think for five minutes about what the Feynman path integral tells us, you'll quickly learn that the path integral contour a standard quantum field theory textbook tells you how to use is highly insufficient for doing anything interesting in the out-of-equilibrium context. So we'll look at what we need to do. In the second lecture, I'll try to derive constraints for effective field theory. Then I'll take a detour and tell you about one particular effective field theory where I think we have some hope of understanding, which is hydrodynamics. Since I'm assuming that most of you are not familiar with this topic, I'm going to review hydrodynamics as an effective field theory.
And then finally, I'm going to put together one and two to construct the hydrodynamic dissipative effective field theory. I should say that as I go along, I'll start out slow, but things will get reasonably involved somewhere in this bit, and the last bit will bring in elements of topological field theory, which you might think has nothing to do with this topic, but I'll try to convince you that it's an efficient way to understand what's going on. So that's the general plan. Are there any questions before I get started? Good. So let's get started with the microscopic quantum field theory for dealing with out-of-equilibrium situations, the sort of rhetorical question to which the answer is clearly that we need something more than we usually do. The question I want to ask is: what is the most general quantum mechanical path integral contour that one should consider if one is interested in computing all correlation functions with any given time order? Let me phrase this as follows. It should compute for us all correlation functions, with arbitrary operator insertions. It should allow us to talk about all possible time orderings, not just scattering processes where you set up some initial state, evolve it, and watch what happens in the final state. It should allow us to do something more: to ask questions when the underlying quantum state is not pure but is described by some density matrix. All of these are part and parcel of general quantum field theory. It's just that, coming from a particle physics point of view, we often focus very quickly on asking questions about scattering processes, and so many of these things are not considered for that particular reason. Insofar as you are just focusing on scattering, you may not need to worry about these things. But they are part and parcel of the theory, so we should have a framework to understand them.
For the most part, this question was very clearly answered in the 50s and 60s. And the answer to these more general questions is a slight upgrade of that, but for reasons that are not entirely clear to me, it was not really considered until, say, the last couple of years. So this one is the formalism that was developed by Schwinger and Keldysh; I'm going to call it the Schwinger-Keldysh formalism. And this one you can call generalized Schwinger-Keldysh, although in the literature you will see names like timefolds, out-of-time-order contours (abbreviated OTO), et cetera. Just to simplify my life, whenever I write formulae in a bit, I'm going to pretend I'm doing quantum mechanics: I'm not going to keep track of the spatial insertions of any of the operators in the correlation functions, only the temporal positions. If you want to put back the spatial positions, you can; nothing will change, but it'll save me some time writing formulae on the board. So what do we know? In quantum mechanics, quantum evolution proceeds by taking a state and acting on it with a unitary. Often you just think of this unitary as e to the minus i H t, with a time-independent Hamiltonian; that is sufficient in that case. If the Hamiltonian is time dependent, then you do a path-ordered exponential of the Hamiltonian integrated over time. This implies two things. It implies that density matrices transform by conjugation, because the density matrix isn't really a vector in a Hilbert space; it's more like an operator. It differs from an operator in a small detail that you'll see in a second, but you can think of rho as roughly psi psi, with a ket and a bra, and then the transformation is obvious. Heisenberg operators, on the other hand, also transform by conjugation, except the conjugation is reversed, because, again, this follows from the action of the operator on the state. But the crucial point: states evolve by a single unitary.
So as long as you're interested in asking how a state evolves, you just have to skeletonize, in Feynman's language, this single unitary. You do time steps: if you're a stat mech person, you would take e to the minus i H t, decompose it into little time steps, think of each step as a matrix, exponentiate that, and take that matrix to the nth power. That would be what you get for the total evolution. But for a density matrix you have to do that for both the U acting on the left and the U dagger acting on the right. That's the main difference. And if you think of this guy as a forward evolution, as one would do in a path integral contour, this guy seems to require one copy of the forward evolution and another copy of an evolution, but this one running backwards. So already, if you have a density matrix, the formalism of quantum field theory, or quantum mechanics itself, demands that you carry around two copies of your evolution operator. Representing the evolution operator in a path integral requires that you put together two copies of those and then evaluate how rho evolves in time. Now let me ask one more question and then motivate the most general contour to draw. Statement number one, which applies to this Schwinger-Keldysh context, is that density matrices require you to have a path integral contour that has two legs: one running forward in time, one running backwards in time. That's a story you can read, for example, in the nice physics report by Chou et al. from the mid-80s, called "Equilibrium and Nonequilibrium Formalisms Made Unified," where they describe in great detail how to apply the Keldysh formalism in that context. I'll tell you how to think about it in a second. But before I do that, I want to talk about the more general context. Let me ask the following other question: suppose you're interested in correlation functions of operators.
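To make the skeletonization concrete, here is a minimal numerical sketch of the procedure just described: Trotterize e to the minus i H t into little steps, raise the step matrix to the nth power, and then conjugate rho with both U and U dagger. The two-level Hamiltonian here is a hypothetical toy example, not anything from the lecture.

```python
import numpy as np

# Toy two-level system with a time-independent Hamiltonian (sigma_x).
H = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

def trotter_unitary(H, t, n_steps):
    """Skeletonize U = exp(-i H t): first-order step matrix raised to the n-th power."""
    dt = t / n_steps
    step = np.eye(len(H), dtype=complex) - 1j * H * dt
    return np.linalg.matrix_power(step, n_steps)

t, n = 1.0, 100000
U = trotter_unitary(H, t, n)

# A density matrix evolves by conjugation: it needs U on the left AND U dagger on the right.
rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)   # pure state |0><0|
rho_t = U @ rho @ U.conj().T

# Exact evolution via the spectral decomposition of H, for comparison.
evals, evecs = np.linalg.eigh(H)
U_exact = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
rho_exact = U_exact @ rho @ U_exact.conj().T

print(np.max(np.abs(rho_t - rho_exact)))   # small; shrinks as n_steps grows
```

The point of the last two lines is exactly the "two copies" statement: a pure state would need only `U @ psi`, but rho carries both a forward and a backward evolution with it.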
I'll take them to be Heisenberg operators, so they will be operators indexed by some time of insertion, and for simplicity I'll index the operator itself by when it's inserted; if you like, I can put another index here. So let's think of O_i as being inserted at time t_i. And let's make the convention that we have n operators, and once and for all fix the times at which we want to ask questions to be in this hierarchy. You'll see why this is useful in a second; I'm just picking n temporal instants and demanding that they are ordered in this way, which is no loss of generality. Now, I can ask questions about this correlator, which you will all tell me is time-ordered. I can also take any other correlator in this set, and in fact there are lots of them, because there are n insertions of n operators: I can permute them however I want, and clearly there are n factorial permutations because there are n objects. But of these, only one has the operators appearing sequentially in the order in which the times are ordered, which is the time-ordered one; everything else, by definition, is out of time order. The question I want to ask is: if you want to set up a formalism for computing them, what would you do? What kind of path integral contour would you write? And you don't have to think very hard, because the answer is already in here, in Heisenberg evolution, simply because if I want an operator at t_i, I can write it as U dagger, Schrodinger operator, U, evolved from some fiducial time t_0; at that fiducial time, I just go back to the Schrodinger picture and talk about the Schrodinger operator. In other words, if I write down any of these correlators, I can fix a reference time t_0 and imagine these operators as being sprinkled at different times, with t_1 appearing at the top; in this case, 1, 2, 3, 4, 5, 6, so t_6 appearing down here and everything else in between.
But if you think of how each of these unitaries is represented by a leg of the path integral, you see what's going to happen. I need to go up, and my path integral contour winds up and down in time, simply because of these unitaries. The unitaries are what you can Trotterize and write down in terms of the path integral. The path integral then requires me to refer back to some fiducial time and write down a contour that switches back and forth in time. I'll explain what I did at the tops and bottoms, but for now, the flow of time on the contour, because nothing is ordered, winds back and forth. In other words, if you look at these guys, there's an e to the plus i H t in here and an e to the minus i H t in here. Every piece of e to the plus i H t goes back in time, and every piece of e to the minus i H t goes forward in time. But if you do the same evolution forward and back, they cancel. So I have refrained from drawing this all the way down and then going back up, simply because nothing happens here: U U dagger is 1 by unitarity, so I can erase this piece and go up. The upshot is that if you ever want to ask questions about all possible correlation functions, your only hope is to understand how to do path integrals on such contours. Now, for semi-obvious reasons, these had not been considered in the literature before, because forward and backward evolution is not something that makes experimental sense: if you're trying to experimentally probe a system, you can't forward and backward evolve it. You can't unevolve the system with the same Hamiltonian; once it's gone somewhere, it's gone somewhere. But theoretically, there's nothing wrong with this.
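To spell out the cancellation in the simplest case, take a two-point function with $t_2 > t_1$ in the state $\rho$. Writing each Heisenberg operator as $U^\dagger \hat{\mathcal{O}}\, U$ and using $U(t_2,t_0)\,U^\dagger(t_1,t_0) = U(t_2,t_1)$, the redundant forward-and-back pieces collapse:

```latex
\langle \mathcal{O}(t_2)\,\mathcal{O}(t_1)\rangle
= \operatorname{Tr}\!\left[\rho\;U^\dagger(t_2,t_0)\,\hat{\mathcal{O}}\,U(t_2,t_0)\;
  U^\dagger(t_1,t_0)\,\hat{\mathcal{O}}\,U(t_1,t_0)\right]
= \operatorname{Tr}\!\left[\rho\;U^\dagger(t_2,t_0)\,\hat{\mathcal{O}}\,
  U(t_2,t_1)\,\hat{\mathcal{O}}\,U(t_1,t_0)\right].
```

The contour therefore only needs to run forward to $t_1$, insert an operator, run forward to $t_2$, insert the other, and then return to $t_0$; any evolution past $t_2$ would be erased by $U U^\dagger = 1$.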
And one of the things that happened in the last couple of years, again motivated by holography in a very interesting development that I won't touch upon in any detail, is that people have realized that correlation functions and observables of this kind encode lots of details of how systems approach thermal equilibrium. The genesis of this discussion, at least the first understanding of this structure, was in the context of black holes and how black holes thermalize. But trying to understand that in a much more systematic way, at least in the context of the quantum mechanical models called the Sachdev-Ye-Kitaev models and their cousins, has led people to ask questions about these contours. So these contours are something people are actively studying, along with correlation functions of this kind, though not all of them. What I'm going to tell you is the framework to do all of them in one go, because once you can generalize once, you can just do it all the way. OK, so I'll come back to that and tell you why that's the case. It's a question of what you want to do. So yes, the answer to your question is yes, but there's a reason why one usually does that and why this last piece is there, and I will describe that once I tell you what the full story is; I just wanted to motivate the discussion. Any other questions? So I hope it's clear that this is the structure of interest. Now let's try to put some words and some structure to this and ask how to set up the quantum field theory path integral in this general context. I have to give some names and some definitions, and then we are in business. Let me define a k-fold out-of-time-order contour to be one where I start, rotating that picture by 90 degrees for simplicity, and write down a path integral contour with 2k legs, k of them going forward, k of them going back. Time runs this way.
This is some reference time t_0; t_1 is up here, t_n is somewhere here because of my ordering. Now let's answer the question I was just asked. In some sense, you could say: take your initial state, evolve it back and forth to wherever you want, and then stop wherever you end up at the end. But think about it the following way. If you have started with some initial state and evolved back and forth, ending up at the last point where you can put in the last operator in your sequence, there's a sequence of evolutions, and you don't quite know what the quantum state has done. In particular, if these unitaries themselves involve time-dependent Hamiltonians, you don't know where the state has gone after some point. Usually, the way one deals with this, at least in scattering theory, is to say: look, your state starts out, it evolves, and the evolution is mostly adiabatic; the out state is some phase-rotated version of the in state, and if it's only phase-rotated, you know how to deal with it. But in general, if the evolution is not adiabatic, if there's violent time evolution, then the final state you end up with has undergone some rotation, and you don't know what it is. The simplest thing to do is to revert back to the initial state and ask questions with respect to it, because the initial state is something you control. So when I do these kinds of contours, I will always revert back to the initial state, simply because that is where I started: I know exactly how I prepared it, so going back there, I have all the information I want. In the question that was asked, I could put the operators from t_n up to t_1 and look at the time-ordered correlator, but I don't know what the state is at times t greater than t_1; that would be an in-out correlator.
If I knew what the out state was after evolution from my initial state at t_0 to a little bit of time after t_1, I could compute this as a correlator between the in state and some out state. What we always have at our disposal is full knowledge of the in state, so I would like to compute this guy without worrying about what happened later. The way to do this is to evolve the system a little bit past the latest time of interest and then just fold yourself back, with a trivial unitary, to the initial state. You can think of this piece as simply the identity operator inserted there, basically saying that you reflect whatever is happening here back to the past and then evolve back. I'll write this in a nicer fashion in a second, but that's basically it. In this context, you have to do this at every junction. So the contours I'm going to consider have forward and backward legs, with junctions that are insertions of the identity matrix at the future and, in this case, also the past. Now, the contours where you compute time-ordered correlators are Schwinger-Keldysh, where k equals one: I have two legs, one going forward, one going back, glued together at some future point. But they're part of a much bigger ensemble of objects, where you can go back and forth as many times as you like. So let's convert these objects into generating functions. Let me first do it for k equals one. Given a contour, all I wanted to do was compute correlation functions, right? So what I would do is turn on sources, put sources everywhere. Once I put in sources, I can evaluate the path integral as a functional of the sources and then differentiate.
The new thing is that because you have two pieces of evolution, you can put sources both on the top and on the bottom, okay? So let me write it as follows. I take rho as I had there, and I evolve it forward with sources J right (I'm going to call this the right contour), and then I evolve back with some sources which are a priori different, so let's call them J left. This takes rho into some version of itself after evolution, but it has come back, and this piece here, which is the identity matrix, is nothing but taking the trace of this matrix. In other words, the reason I put an identity there is simply to say that you evolve rho until its latest time and then you trace out both its legs. By definition, this is what you would see, for example, in here; they won't use all these words, but that's the general structure of the Schwinger-Keldysh contour. We want something more. For what we have here, I have to give you some labels. Let me call this one right and this one left, so that the top and bottom are labeled right and left. Then, no, no: the in-state information is in rho. So, sorry, yes, rho contains the in-state information, and I'm doing nothing to it; there's no trace here, the traces are in the far future. The easy way to think about it is to unwrap this and see what happens as you evolve rho to one side, evolve rho to the other side, and then trace the legs. So here I can say the same words, except I have to give you some labels. Inspired by those labels, it's useful to write down labels of the following kind: by convention, right for something that moves right and left for something that moves left in this picture, and then put the ones at the top and bottom, and the twos, threes, and so on inside. It goes up to k, with k somewhere in the middle and the threes just up here.
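In equations, and hedging on conventions (overall normalizations and the placement of the trace vary across references), the k = 1 generating functional described here is

```latex
Z_{SK}[J_R, J_L] \;=\; \operatorname{Tr}\!\Big[\, \mathcal{U}[J_R]\;\rho\;\big(\mathcal{U}[J_L]\big)^{\dagger} \Big],
\qquad
\mathcal{U}[J] \;=\; \mathcal{T}\,e^{-i\int dt\, H[J(t)]},
```

so functional derivatives with respect to $J_R$ insert operators on the forward (right) leg, derivatives with respect to $J_L$ insert them on the backward (left) leg, and the identity insertion at the far future is implemented by the overall trace.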
And you see that, first of all, one key difference is that the odd-numbered contours have right followed by left, while the even-numbered contours have left followed by right: one-right precedes one-left, but two-left precedes two-right. That's just an artifact of my labeling. I can call these collectively alpha-right, where alpha runs from one to k, and alpha-left, with the same alpha, and I can write down a generating function which is a functional of J alpha-right and J alpha-left. It is a trace of rho sandwiched between a sequence of unitaries; let me write it like this: one-right, one-left dagger, then two-right, two-left dagger, and keep going until I'm done, up to k, and then trace. So this trace here refers to the fact that there's a trace at the k-th level, somewhere deep down in this nesting. This is something that we all should have done a long time ago, I think, but there you are. These are the most general contours of interest, and as you can appreciate, this one has been very much studied, because it allows you to compute response. What one is usually interested in is taking some system prepared in some initial state and asking questions about how things are either time ordered or, in this case, anti-time ordered. You prepare a system, you disturb it, and ask what happens to it, what the response of the system is. So if you know what Kubo formulae are, as I'll describe later, then Kubo formulae are contained within this formalism. But for more detailed questions you don't have a choice but to consider these more general contours. We're not changing the rules of quantum field theory; we are just asking the most general question, and therefore have been forced to generalize our notion of which path integral contours we study. Well, so here I assumed that was the case.
If you want to compute it when the incoming and outgoing states are different, and you happen to know what the out state is, then the problem is very similar to this: you evolve with all the orderings, so you'd still have something like this, and then you just project onto your given out state. The problem with that is it presupposes you know what the out state is. The sandwiching is effectively a projection, because you have a state here and you ask how much of it overlaps with your particular state, and that you can do anyway. So even if I put these operators out of time order, the contour would still be the same, because that came just from the Heisenberg evolution of the operators; the rest of the details are contained in here. The J's by definition can have time dependence, yes; that's one of the reasons I generalized the contour. You could turn on time-dependent sources, because all that would do is make this U something like e to the minus i times the integral of H, which is a functional of J, dt, with some time ordering in front. Now for fun, let me tell you some useful facts, because contours are all nice, but at the end of the day you want to distill them into something useful in terms of observables. It's nice to ask what different bases of observables you can have; for different problems different bases are useful, and in the literature you will find lots of them, so I'm just going to tell you some useful facts about these contours for comparing with the literature. So let's talk about what classes of correlators one can consider.
The most basic objects I'm going to call the Wightman basis, because these are really like the Wightman correlators computed in Euclidean field theory and analytically continued. Let's call them G of sigma, where I take sigma to be an element of the permutation group S_n. So with n objects, indexed one to n, I can permute all of them, and the symmetric group is the group of all permutations: you can write down n factorial permutations, in one-to-one correspondence with elements of S_n. So G sigma is going to be something like operators at the permutation of one, the permutation of two, dot dot dot, the permutation of n. And just to be clear, I'm going to make one small notational change: I'm going to put hats on these operators to denote that these are the basic operators of the theory. They don't live on the contour; they live in just one copy of my theory. I have n operators acting on some Hilbert space, so O hat acts on some Hilbert space H, and I can take the operators, take their time insertions, permute them, and have n factorial such correlators, which I'm encoding into this basic set. Now, this k-OTO contour can be thought of in many different ways. One useful way is to attach a copy of the Hilbert space to every leg. Think of all the right pieces of evolution: each is like a state evolving, moving forward in time, so I can associate to every right leg a copy of the Hilbert space indexed by this alpha. Every left leg is like the conjugate state, because it's evolving backwards in time: if psi evolves forward, then the corresponding bra evolves backwards. So I can say that the k-OTO contour should really be thought of as living on this extended Hilbert space, and I have implicitly done that by allowing myself to turn on operators indexed by alpha. You can think of the sources J alpha R and J alpha L as acting on states that live on these copies.
So in other words, there are operators here which are also indexed by alpha and by right/left labels, and this is the reason I put the hats up there: to distinguish the basic operators from operators that live on this extended Hilbert space. Of course, this is just bookkeeping; it's not that I'm doing something different, but it's a useful way to think about it, because it gives you a mnemonic for mapping these guys back onto here. Essentially, all I need to give you is some kind of rules to take these correlators involving the alpha and R/L labels back to the hatted operators. So let's call this the single-copy theory, because that's really the physical theory whose physical correlators we're interested in, and call these the OTO operators. So what can I do in terms of these OTO operators? Well, I have n operators if I'm computing an n-point function, and I can put them wherever I want: t_n was here, t_1 was here. I have 2k horizontal segments, and each operator can go into any of the 2k segments. So given this contour, I can place my operators where I want, and the left-right correlators are simply these; they still have the sigma label, but let's not put the sigma label just yet. And I'll decide by convention a time ordering which simply follows the flow of time along the contour. So there's a contour ordering, because there's an arrow along the contour, and I will say that along the contour, an operator placed here comes before an operator placed here, which comes before an operator placed here, and so on and so forth. So I can write down these correlators and give you a time ordering, which I'll call the contour time ordering. But there are lots of correlators here, because each operator can appear in 2k places and there are n of them. So it's a vastly redundant set of information: 2k to the n is much bigger than n factorial once you pick k sufficiently large.
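The redundancy is easy to see by brute-force enumeration; a small sketch (the specific values of n and k are illustrative, not from the lecture):

```python
from itertools import permutations, product
from math import factorial

n, k = 3, 2            # three operator insertions on a 2-fold (k = 2) contour
legs = 2 * k           # each operator may sit on any of the 2k horizontal segments

wightman = list(permutations(range(n)))            # physical orderings: n!
placements = list(product(range(legs), repeat=n))  # contour placements: (2k)^n

print(len(wightman), len(placements))   # 6 versus 64: vastly redundant
```

Already at n = 3, k = 2 there are 64 contour placements encoding only 6 independent Wightman functions, which is the over-completeness that the constraints of the next lecture will exploit.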
So this means that not all of these are independent, and herein lie the constraints which I will state now but extract and put to use in the next lecture. There's another basis which is useful and often used in non-equilibrium field theory, which is the average-difference basis. It's nothing very profound: it just combines the left and right, the average being the average of the two and the difference just the difference. My convention is to average sequentially: one-right with one-left, two-right with two-left, three-right with three-left, and so on. Other linear combinations are possible, but this one is inspired by what is usually done in the Schwinger-Keldysh case; it's sufficient for most purposes, and it gives you some useful ways of thinking about things. Going from here to here is just a basis rotation, so there are still (2k) to the n of these, because there are (2k) to the n of those. And then a final set of correlators that's interesting for physical applications is still in the single-copy theory, but instead of just writing down words of n operators, I split these words into commutators and anti-commutators. So you can put a commutator here and then take an anti-commutator with O hat sigma three, and then nest them, with each bracket being either a commutator or an anti-commutator. This is again just some dressing of these guys, and the reason one usually does this is that when one measures response, it's useful to see whether something precedes or succeeds something else, and commutators and anti-commutators do a good job of picking out what comes before and what comes after. For example, you know this already at the level of two-point functions: you can have either a commutator or an anti-commutator. I'll convince you later that the commutator, with an appropriate dressing, measures the response of O two in a system perturbed by O one, whereas the anti-commutator measures fluctuations on top of that response.
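In symbols, with the caveat that normalizations differ across the literature (some authors place the factor of one half on the difference instead), the convention described here reads

```latex
\mathcal{O}_{\alpha\,\mathrm{av}} \;=\; \tfrac{1}{2}\big(\mathcal{O}_{\alpha R} + \mathcal{O}_{\alpha L}\big),
\qquad
\mathcal{O}_{\alpha\,\mathrm{dif}} \;=\; \mathcal{O}_{\alpha R} - \mathcal{O}_{\alpha L},
```

which is an invertible linear map on each pair of legs, so the count of correlators is unchanged by the rotation.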
Usually, for this to be quite the response, I have to put some time-ordering gadget here, but we'll do that in a second. So insofar as quantum field theory is concerned, you can do all of this, and all of it is just a rearrangement of data into some useful form: this is what we really want, this is what we really measure, this is what the contour naturally wants to do. So in the last five-odd minutes, let me tell you some facts. Oh, I didn't count this: an exercise for you is to convince yourself that this set is also overcomplete. There are too many of these: there are 2^(n-2) times n! of those. I'll have you check that. It's an interesting accident that for two-point functions the counts agree, but that's because a two-point function is captured by its symmetric and anti-symmetric parts, and you also know that higher-rank tensors are not exhausted by their symmetric and anti-symmetric parts. So the last thing I'll do before I stop is give you these relations in some useful form. We see that we only have n! observables, but every other way of thinking about them gives us too many; one would like to connect up these different pieces so that you can just use the contour and then evaluate what you want. So let me start by saying that there's a way to go from the nested basis to the Wightman basis using some generalized Jacobi identities. Jacobi identities basically give you all the relations you need. But they're more than the usual Jacobi identity: the usual Jacobi identity you know in terms of commutators, which says that the nested commutator is cyclic. But now you have two kinds of brackets; you have both a commutator and an anti-commutator.
So as you go to higher orders you get more relations, and they take care of all the relations among the 2^(n-2) times n! of these; convincing yourself that exactly n! of them are linearly independent is some simple vector-space analysis, so I won't go through it. The left/right and average-difference bases are very easily related by a basis rotation, but the average-difference correlators can be mapped into the nested correlators by the so-called Keldysh rules and generalizations thereof, which I will describe in a second. The left/right or average-difference correlators can be mapped to Wightman functions by a canonical embedding, which I will explain. Okay, let me just do the Keldysh rules, because that's all I'll have time for in the next two or three minutes, and postpone the canonical embedding to the next lecture. It's very easy to derive the Keldysh rules, but let me just give you the answer. Take a contour-ordered object with some sequence of average and difference operators. Since it's contour-ordered, I don't have to tell you whether each operator is left or right or where it's coming from; just write down the operators and at every point put an average or a difference. This object, I claim, is given by the following: a sum over all permutations of the following sequence of sets. Some number of these operators are in the first leg, alpha equals one; you can think of this as a permutation sigma_alpha belonging to S_{N_alpha}. Some number are in the first leg, some are in the second leg, alpha equals two, some are in the third leg, and so on. Among them you can permute: the operators in leg one you can permute among themselves, those in leg two you can permute, and so on. So these are the permutations, with sigma_K belonging to S_{N_K}.
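One can check this counting numerically. The sketch below uses random matrices as stand-ins for generic operators (the names and the 4x4 size are just illustrative choices): it builds all 2^(n-2) n! = 12 nested brackets for n = 3, and verifies that only n! = 6 of them are linearly independent, as claimed:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
ops = [rng.standard_normal((4, 4)) for _ in range(3)]  # generic stand-ins for O1, O2, O3

def bracket(X, Y, sign):
    # sign = -1: commutator, sign = +1: anticommutator
    # (overall scalar factors don't change the rank count below)
    return X @ Y + sign * (Y @ X)

nested = []
for i, j in itertools.combinations(range(3), 2):  # unordered innermost pair
    k = 3 - i - j                                 # the remaining, outermost operator
    for s1 in (-1, +1):
        for s2 in (-1, +1):
            nested.append(bracket(bracket(ops[i], ops[j], s1), ops[k], s2))

vecs = np.array([m.ravel() for m in nested])      # 2^(n-2) * n! = 12 nested brackets
print(len(nested), np.linalg.matrix_rank(vecs))   # 12 brackets, but rank 6 = 3!
```

The rank deficit from 12 down to 6 is exactly the set of generalized Jacobi relations being discussed: every nested bracket expands into a linear combination of the 6 ordered words of the three operators.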
You stick in a time-ordering theta function for every such index, and then you multiply this time-ordering theta function by the following object, which is a nested bracket of the following form. Call it the Schwinger-Keldysh bracket; I'll tell you what this bracket is in a second. The brackets close at some point. The Schwinger-Keldysh bracket is designed to do the following: it takes an average or a difference operator and spits back an operator in the single-copy theory. Its action is defined as follows. If there's something here, a single-copy operator, and you take the Schwinger-Keldysh bracket of it with an average operator, it gives you the anti-commutator. Oh, and I should say that my anti-commutator has a factor of half, to absorb the fact that averages carry a factor of half. And if it's a difference operator, it gives you a commutator. These rules are sufficient, because you give me a sequence, I take all permutations, I nest these guys, and then I come in here and look: the identity bracketed with an average is just the average operator itself, which is just the operator itself in the single-copy theory, but the identity bracketed with a difference is zero. Apply this rule sequentially, and this correlator becomes some sequence of nested brackets. I will show you this in just one example and then stop, since I'm slightly out of time. Let's do this for two-point functions in the Schwinger-Keldysh theory. I have a contour-ordered object of A-average, B-average, and two permutations, A, B and B, A. So let's say this is the time t_A and this is the time t_B. Then I have theta_AB, which simply means that t_A is bigger than t_B, times the nested bracket of the identity with A-average and then B-average, plus theta_BA, which happens when t_B is bigger than t_A, times the identity bracketed with B-average and then A-average.
The first term gives theta_AB times half the anti-commutator of A-hat and B-hat; the second term gives theta_BA times half the anti-commutator of B-hat and A-hat. But the theta functions sum to one, and so this is just the anti-commutator. Similarly, you can convince yourself that A-average with B-difference gives theta_AB times the commutator, and with that, I'll stop.
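The two-point examples just worked out can be checked numerically. This is a minimal sketch of the bracket rules, with random matrices as stand-in operators; the function names `sk_bracket` and `contour_two_point` are my own illustrative labels, and the conventions (factor of half on the anti-commutator, later-time operator nested innermost) follow the lecture:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

def sk_bracket(X, op, label):
    # Schwinger-Keldysh bracket: with an average operator it gives the
    # anti-commutator (carrying its factor of half), with a difference
    # operator the commutator.  Acting on the identity (X is None),
    # 'av' returns the single-copy operator and 'dif' returns zero.
    if X is None:
        return op.copy() if label == "av" else np.zeros_like(op)
    return 0.5 * (X @ op + op @ X) if label == "av" else X @ op - op @ X

def contour_two_point(theta_AB, labA, labB):
    # theta_AB = 1 when t_A > t_B; in each term the later-time operator
    # sits innermost next to the identity, as in the lecture's example.
    term_AB = sk_bracket(sk_bracket(None, A, labA), B, labB)
    term_BA = sk_bracket(sk_bracket(None, B, labB), A, labA)
    return theta_AB * term_AB + (1 - theta_AB) * term_BA

half_anti = 0.5 * (A @ B + B @ A)
print(np.allclose(contour_two_point(1, "av", "av"), half_anti))       # True for either theta
print(np.allclose(contour_two_point(1, "av", "dif"), A @ B - B @ A))  # True: retarded commutator
print(np.allclose(contour_two_point(0, "av", "dif"), 0))              # True: zero when t_B > t_A
```

The average-average correlator comes out theta-independent and equal to half the anti-commutator, while average-difference survives only when the difference operator sits at the earlier time, which is the retarded structure described above.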