The idea was to start with a new topic, lectured by Tomaž Prosen from the University of Ljubljana. Tomaž has been a very inspiring person. He is building the field of integrability and the understanding of chaos, and putting Ljubljana on the map of non-equilibrium centres. If we understand these two words, it will be fine, because I put them together as an oxymoron on purpose. I don't know if you see it, but it is an oxymoron, right? Chaos is usually contrary to integrability, and we could discuss for a whole series of lectures what integrability means, and I think we would still not be clear after a couple of hours, because we can confuse ourselves on and on. Of course, there is a precise mathematical-physics apparatus which allows us to work with integrable systems, but it is still very elusive how to define integrability. So I would like to claim that what I will speak about in these three lectures is also something like integrable chaos, because we will use some algebraic tricks to solve the dynamics of many-body systems in a way which is similar to what we can do in some integrable systems. I mean, there are algebraic tricks behind it; there are cancellations which happen because of some magic.
And, you know, magic is never good, because it suggests that it's not the real world, right? Still, I want to convince you that this might smell a little bit like the real world; at least we hope so. And even if not, it is great to have an example of exactly solvable dynamics, even though it is a very special one. And if you can go a little bit beyond, I'll tell you what I believe can be done beyond this, so we can maybe say something about more real-world problems, which is always hard, right? OK, so that's our title. I will spend probably half of my lecture today introducing you to tensor networks and circuits. Actually, yesterday I had to leave for Ljubljana, so I'm not sure how much someone told you about circuits; I believe you heard about random circuits, so you already know the basics, even though I don't want to assume any kind of specialized pre-knowledge, so I will try to go slowly. These two buzzwords, circuits and tensor networks, the latter being the graphical language for working with circuits, are extremely important in this field of physics, and they allow us to do very powerful formal manipulations in elegant and simple ways. I would even claim that the notation of tensor networks is something that competes with bra-ket notation. We learn bra-ket notation when we learn quantum mechanics, and it is super efficient, right? Sometimes you have to convince your mathematician friends that it is useful, because they hate it. It's just a couple of rules you have to learn, and then you can forget about the mathematical meaning of these objects. The grammar of tensor networks is even more extreme: you just learn the rules, and then you can work with them, even forgetting what the objects behind them really are.
So what are tensor networks? Tensor networks are basically graphs, where the nodes of the graph are tensors, and the edges correspond to contracted indices of tensors. For example, if we have two matrices, A_{ij} and B_{ij}, then the product of the matrices is another matrix, where you draw A and B as two boxes joined by a line: this leg is i, the shared leg is k, and this leg is j. That is just the product of matrices, (AB)_{ij} = sum_k A_{ik} B_{kj}. Now, usually in this field we have tensors of degree higher than 2, so we have more than two legs. For example, it is very common to work with tensors which have three legs, say A_{ijk}. Sometimes I will write indices explicitly, but later we will stop doing that: we will just draw the tensor network, all the dangling ends will correspond to free indices, and all the connecting bonds will correspond to indices which are summed over. A very popular version of a tensor network is the matrix product state, which is composed of this type of three-leg tensors. The leg which sticks out, the perpendicular leg, is usually something special called the physical leg; it corresponds to the physical Hilbert space. And by the way, I don't want to formalize too much, but of course each index, in linear algebra, corresponds to a vector space. So the number of indices that a tensor has corresponds to the number of factors in the tensor product of Hilbert spaces on which it acts. You could think of this A as an element of C^d ⊗ C^d ⊗ C^d; each index corresponds to one factor of the tensor product space. Here I assume that all these factors have equal dimension, but it's not necessarily so.
For example, these vertical indices might have a different dimension: they correspond to the physical space, while the horizontal indices correspond to the auxiliary space, and this type of tensor network then corresponds to the super popular object called a matrix product state. Now you may want to contract it at the ends with some rank-one, or degree-one, tensors, which are just vectors, a left and a right vector, and then you get an object which has only the vertical free indices left, which corresponds to a many-body state. Here: one, two, three, four, a four-body system, four sites. Of course this is a super short introduction, so I don't want to elaborate further on this; we will go straight to circuits. But I want to stress that we will also think of circuits as tensor networks. It is very useful: circuits are a special variant of tensor networks, but thinking of them as tensor networks makes them a more symmetric object, because you don't have to worry about which direction is time. Okay. Now, I will usually not erase the board, but this was just a really short prologue, so I will erase it, and we will now start with quantum circuits. So what is a quantum circuit? Again, this is not a quantum information or quantum computation lecture; this is a lecture on many-body physics. So for me, a quantum circuit has a slightly different meaning than for a quantum information scientist. I think of a quantum circuit as a dynamical system, and I will explain what I mean. This is something that I really want to elaborate in some detail: what do I mean by a dynamical system here?
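The boundary contraction just described can be sketched in a few lines; a hedged example with made-up dimensions (physical dimension d, bond dimension chi, random tensors standing in for a real MPS):

```python
import numpy as np

L, d, chi = 4, 2, 3                 # sites, physical dim, auxiliary (bond) dim
rng = np.random.default_rng(0)
M = [rng.normal(size=(chi, d, chi)) for _ in range(L)]  # 3-leg MPS tensors
vl = rng.normal(size=chi)           # left boundary: a degree-1 tensor (vector)
vr = rng.normal(size=chi)           # right boundary vector

# Contract the horizontal (auxiliary) bonds from left to right;
# only the L vertical physical legs remain free.
psi = vl
for Aj in M:
    psi = np.tensordot(psi, Aj, axes=([-1], [0]))   # (..., chi) x (chi, d, chi)
psi = np.tensordot(psi, vr, axes=([-1], [0]))
assert psi.shape == (d,) * L        # a four-body state: d^L amplitudes
```

The result is exactly the many-body state with only vertical free indices mentioned above.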
Because if you want to speak of chaos, you have to define it quite precisely, and here I will refer to a lot of claims and statements that Anatoli Polkovnikov gave on the first day of the school, where he went quite deep into the subject of ergodicity and chaos. So I'll try to be slightly more precise on these notions, and for that I have to define the dynamical system, i.e., what I mean by a dynamical system. A quantum circuit is a local, discrete-time, quantum many-body dynamical system. Basically, a dynamical system is a box, a deterministic box, which is able to provide you with time evolution: it has a rule of motion inside, and it can run the dynamics for arbitrarily long times. You basically provide it with the initial state, and it provides you with the output. What is already implicit here is that I can run it for a long time: it is not just one short event, it can run repeatedly. So I will start with the basic objects, because I want to stress a couple of things here: I insist on locality and on discrete-time dynamics. Now, if you look at textbooks (again, there have been, during this school, some references to dynamical systems theory, like in Professor Yao's lectures, where he referred to classical dynamical systems theory and to logistic maps in classical chaos), all these objects are precisely defined in the context of dynamical systems theory. That is the deeper mathematical side of this type of system. But here I actually try to marry these two things: dynamical systems and quantum circuits. So first, we start with the main object: a qudit. A qudit is a local Hilbert space. All my examples will be for qubits, which means spin one-half.
But just to make the story sufficiently general, my single-site Hilbert space will be a qudit. People often write its dimension as q; I prefer to use d, which immediately suggests that it means dimension. It doesn't matter, just recall that it's d for me. So these are the two concepts, and the second concept is the gate. The gate is a two-particle interaction, which means it takes a state of two particles and spits out another state of two particles. So u is an element of End(C^d ⊗ C^d), the space of endomorphisms, meaning linear maps from the tensor product of two single-qudit spaces back to itself. You can also think of it as a unitary matrix: u is an element of the unitary group U(d^2) of d^2-by-d^2 matrices. Okay, that's my gate. Of course, in quantum information people discuss all possible gates; for all three of these lectures I will only think of two-qudit gates. Single-qudit gates I will sometimes mention, but they can always be put together with a two-qudit gate. So this will be, for me, the fundamental interaction. And I will write it as a tensor which has 2 + 2 = four legs; let's call them i, j, k, l. Time evolution, for me, will usually go vertically. So this will be a gate which takes the states of two particles, i and j, and produces an amplitude to end up in the states k and l of these two particles. It's like an S-matrix: in the spirit of high-energy physics, this could be considered an S-matrix. Of course, for us it is simply a unitary matrix which provides local-in-time, local-in-space evolution for one step. This will be one of the basic steps. I will not use Dirac notation; this is probably the only time I'm using it, just to remind myself what I really mean.
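To make the index bookkeeping concrete, here is a hedged numpy sketch: a generic two-qubit unitary (obtained from a QR decomposition purely for illustration) reshaped into the 4-leg tensor u^{kl}_{ij}:

```python
import numpy as np

d = 2
rng = np.random.default_rng(1)
# A generic d^2 x d^2 unitary via QR of a random complex matrix.
X = rng.normal(size=(d*d, d*d)) + 1j * rng.normal(size=(d*d, d*d))
U, _ = np.linalg.qr(X)
assert np.allclose(U.conj().T @ U, np.eye(d*d))

# The same object as a tensor with four legs:
# rows (k, l) = outgoing particles, columns (i, j) = incoming particles.
u = U.reshape(d, d, d, d)               # u[k, l, i, j]

# Acting on an incoming two-particle state gives the outgoing amplitudes,
# just like an S-matrix for one local time step.
psi_in = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
psi_out = np.einsum('klij,ij->kl', u, psi_in)
assert np.allclose(psi_out.reshape(-1), U @ psi_in.reshape(-1))
```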
And then when I have to write a matrix, I will write it like this: lower indices will be the row and upper indices will be the column, u = sum u^{ij}_{kl} |kl><ij|, summing over i, j, k, l. Now, what we all know is that u should be unitary. What does that mean? That u†u has to be equal to the identity, and if I write it in components, this means: sum over k, l of (u^{ij}_{kl})* u^{i'j'}_{kl} = δ_{ii'} δ_{jj'}. Of course this is obvious to all of us, but you see, I insist that even unitarity is useful to spell out explicitly, because in the language of tensor networks it is a contraction of a simple diagram. So how can I write this formula as a diagram? Basically, it is a contraction of two tensors, two boxes: one is u and the other is u†. I start with i', j' at the bottom, then come the contracted indices, and I will still write the names k and l explicitly, and then I end with i and j. What this equation means is simply that I can contract this diagram: these two boxes are like particle and antiparticle, they annihilate. What remains is just the free map, I mean the identity map: two lines, i connected to i' and j connected to j'. If two indices are connected by a line, they have to have the same value; a plain line is the tensor which corresponds to the identity matrix. So this diagram is the product of the two boxes. Is it all clear? If anyone has a question, ask now; this is the language we have to use. Yes? Is it geometrically local? Yes, it is geometrically local in this sense. You will see when I build the circuit; I'm still not there, but you will see immediately. And no, I'm not using any density matrices yet.
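The "particle-antiparticle annihilation" of u and u† is literally a one-line contraction; a sketch, again with a QR-generated unitary standing in for the gate:

```python
import numpy as np

d = 2
rng = np.random.default_rng(2)
X = rng.normal(size=(d*d, d*d)) + 1j * rng.normal(size=(d*d, d*d))
U, _ = np.linalg.qr(X)
u = U.reshape(d, d, d, d)               # u[k, l, i, j]

# sum_{k,l} conj(u^{ij}_{kl}) u^{i'j'}_{kl} = delta_{i i'} delta_{j j'}:
# contracting the box with its dagger leaves only two straight lines.
contraction = np.einsum('klij,klmn->ijmn', u.conj(), u)
two_lines = np.einsum('im,jn->ijmn', np.eye(d), np.eye(d))
assert np.allclose(contraction, two_lines)
```

The right-hand side is exactly the pair of identity lines in the diagram.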
I will use them later if you want; as you will see, this is discrete-time dynamics, so the only density matrix which makes sense is the infinite-temperature state. I will use the tracial state, the infinite-temperature Gibbs state, the fully mixed density matrix, but that comes later. Okay, so now I will introduce a concept which was probably mentioned already yesterday, I'm not sure because I was not here: the brickwork circuit. That is a particular geometry of a circuit which allows all qudits to eventually be coupled to other qudits in a local fashion, and it really mimics what physics does. All right, so now I will define a spin chain. A spin chain is a Hilbert space of L qudits, and I will assume that L is even, for convenience. I don't have to, but for my purposes it will be useful if L is even. I will also assume periodic boundary conditions, again for convenience; I wouldn't have to. For most of my discussion L will actually be a large number: for claims which are exact, L has to be large, so we would have to take the thermodynamic limit, but even that is not crucial for most of the statements we make, so L could be finite, sometimes sufficiently large, but finite. Then I will define a simple tensor product of gates, one layer of gates, which I define as U_even = u^{⊗ L/2}, the tensor product of L/2 copies of u. Now, I don't want to get too abstract, so I will immediately draw the diagram that does it: this is a unitary operator, an element of the space of linear operators over the whole Hilbert space. And what does the tensor product mean? For us, a tensor product just means putting one picture next to another. It's again very clear: the tensor product is just the juxtaposition of tensor-network pictures.
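"Tensor product = put the pictures next to each other" translates directly into np.kron; a sketch of the even layer U_even = u^{⊗ L/2} for a small chain, with a QR unitary as the stand-in gate:

```python
import numpy as np

L, d = 4, 2
rng = np.random.default_rng(3)
X = rng.normal(size=(d*d, d*d)) + 1j * rng.normal(size=(d*d, d*d))
u, _ = np.linalg.qr(X)                  # one two-qudit gate

# One layer: L/2 copies of u side by side, acting on pairs (1,2), (3,4), ...
U_even = np.eye(1)
for _ in range(L // 2):
    U_even = np.kron(U_even, u)

assert U_even.shape == (d**L, d**L)
assert np.allclose(U_even.conj().T @ U_even, np.eye(d**L))  # still unitary
```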
So that's a tensor product of tensor networks: sites 1, 2, up to L. Sorry, allow me to redraw this picture a little lower; I will draw just four boxes, not an indefinite number of boxes, just for graphical convenience. And notice what I do; it will also become clear slightly later: contrary to most of the quantum computation literature, I draw these wires at 45-degree angles, where people usually draw circuits with straight wires and then place the gates on them. It's all the same, both are tensor networks, but since I really want to insist that these are very symmetric tensors, for symmetry purposes I draw the wires at 45 degrees, because later I will look at the circuit from the sideways direction. Anyway, for the moment that is not important. Then I will define another kind of map, the shift Π. Π is again an endomorphism, and it is enough to specify how it acts on a basis; on a product basis it cyclically shifts the sites by one. Then I define what I will call U_odd: U_odd = Π U_even Π^{-1}, the conjugation of U_even by the shift. And what does that mean? It means I just have to rename the wires: what was site 1 now becomes site 2, what was site 2 becomes site 3, and so on. So if I draw U_odd, it is simply a product of gates which sit on the other, staggered set of wires, and then I come back home. Since there are periodic boundary conditions, I have to be a bit careful: there are again four boxes, but one couples the last site back to the first, so this end of the picture is identified with that end, and the wire comes around. And again this is sites 1, 2, up to L. Okay, so why did I bother with this?
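The shift-conjugation that generates the staggered layer can be checked directly; a sketch for L = 4 qubits with periodic boundaries (stand-in QR gate; Π shifts every site by one, so conjugation moves the gates from pairs (1,2),(3,4) to (2,3),(4,1)):

```python
import numpy as np

L, d = 4, 2
rng = np.random.default_rng(4)
X = rng.normal(size=(d*d, d*d)) + 1j * rng.normal(size=(d*d, d*d))
u, _ = np.linalg.qr(X)

U_even = np.kron(u, u)                  # gates on pairs (1,2) and (3,4)

def shift(v):
    """Cyclic shift by one site: |s1 s2 s3 s4> -> |s4 s1 s2 s3>."""
    return v.reshape([d] * L).transpose(1, 2, 3, 0).reshape(-1)

Pi = np.column_stack([shift(col) for col in np.eye(d**L)])
U_odd = Pi @ U_even @ Pi.conj().T       # Pi is real orthogonal, Pi^{-1} = Pi^T

# Independent check: U_odd applies u to pairs (2,3) and (4,1).
psi = rng.normal(size=(d,) * L) + 1j * rng.normal(size=(d,) * L)
u4 = u.reshape(d, d, d, d)              # u4[out1, out2, in1, in2]
step = np.einsum('BCbc,abcd->aBCd', u4, psi)   # gate on sites (2,3)
step = np.einsum('DAda,aBCd->ABCD', u4, step)  # gate on sites (4,1), wrapping
assert np.allclose(U_odd @ psi.reshape(-1), step.reshape(-1))
```

The wrapping gate is exactly the box that couples the last wire back to the first under the periodic boundary conditions.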
Of course, people explaining brickwork networks to you would just say: let's draw this staggered constellation of gates and call it a brickwork circuit. But it's nice also to see how it is generated: by a simple tensor product and a shift. Okay, so now I define the full generator. It is split into two pieces: this one is U_odd, this one is U_even, and the composition of the two is called U, so U = U_odd U_even. All right, so now I can define the dynamics. You know, if you look at how mathematicians define dynamical systems, they have to have a map and a space of states; the space of states for us is the Hilbert space of state vectors. So the dynamics is now defined as |ψ(t)⟩ = 𝒰(t) |ψ(0)⟩. My writing is not beautiful, so I want to insist a little that I write a script 𝒰 for the many-body evolution and the roman u for the local gate, just to make sure we distinguish these two things, because they are fundamentally different: one is the full-fledged many-body dynamics, and the other is the local gate. To make things notationally convenient, I will define the time dependence precisely: for even times, 𝒰(2t) = U^t, and for odd times there is one extra half-step layer, 𝒰(2t+1) = U_even U^t, so that the layers alternate: even, odd, even, odd, and so on. This is the definition. It is a staggered dynamics: the fundamental building block has size 2, so there is staggering in space, but there is also staggering in time. This will be time 1 and this will be time 2; time goes vertically. It is a Floquet dynamics with a drive period of 2 discrete units. I call it Floquet dynamics because I then just repeat the same thing. Maybe I should also say a little bit about what I assume regarding translation invariance. I assume that all the gates are the same, so
this dynamics is both space and time translation invariant. So we could again discuss, for example, the question of time crystals; I will not go into that, but these are kind of perfect models for studying time crystals as well. Now, the question is of course how to break ergodicity, as you have learned in other lectures; there might be several ways. One way is integrability; if you want to break it with some disorder, then of course you need to break translation invariance. But as I said, I will not go into that; I just want to make a reference to the other lectures. So for simplicity I will assume everything to be translation invariant, and almost everything I will say immediately generalizes to completely inhomogeneous circuits: everything could depend on time and could depend on space, for most of the things I will discuss today. Okay. I don't know if I can use colors here; since I will later use a sequence of these gates, let's say the gates of this period will be green and those will be red. The colored chalk doesn't really work like this, but never mind. Yes? Sure. No, I will convince you that I don't need to assume randomness to gain some analytical understanding of the dynamics. It helps to have randomness, as you have seen yesterday, it helps very much, and tomorrow, in the second part of my lectures... Maybe I should really say, in order to give you some sort of direction of where we are going: the first part of my lectures will be about computing correlation functions of the dynamics. For that we don't need any randomness; for that we just need some algebraic tricks for the gates, which turn out to be quite interesting and allow us to compute many things, not just two-point functions. I will outline the technology for that; no randomness needed. Now, tomorrow I will try to also go to the spectral form factor, to compute spectral correlations, and to compute spectral correlations we need to
assume a little bit of randomness. It's just a bit of, I don't know, homeopathy or something like that: you need a little bit of something, and then you can take it almost to zero, but it's important that it's there. Okay, now let me make some examples. I guess most of you have seen this; these are by no means exotic concepts. This is there for anyone who does quantum Monte Carlo or DMRG or any tensor-network, matrix-product-based method. Even when one does continuous-time dynamics, one encounters this sort of brickwork circuit, for example when you have nearest-neighbour Hamiltonians, which is 90% of the examples in condensed matter; condensed matter physics is like that. From such a Hamiltonian, when you take the Trotter limit, u becomes e^{-iτh}, with τ a small number; whatever that means, it has to be small enough compared to the norm of the Hamiltonian. And then you can prove that the continuous evolution can be approximated by U_odd times U_even, up to an accuracy which is quadratic in τ. That's the most naive decomposition of the dynamics into steps, called the Trotter-Suzuki decomposition. And for this gate, now you may want to put indices, and I also have to explain what I mean by that. If I don't write indices, this u is a 4-by-4 matrix. But if I put indices, this means the gate acts non-trivially on sites 1 and 2, or on sites j and j+1. Without indices there is just dimension 4; with indices 1 and 2, the gate is embedded into a bigger Hilbert space, but it acts non-trivially only on the first and second tensor factors. That would be our gate. If you want, I can write it like this: with this embedding, you can write u_{1,2}, or u_{j,j+1}, meaning that there is an identity which acts on the first j-1 sites, then there is u, and then there is an identity which acts on the last L-j-1 sites, so
the sum (j-1) + 2 + (L-j-1) has to be equal to L. If you want, you can then write the even layer as a tensor product of gates u_{2j-1,2j}. Now, a more interesting example, at least scientifically more challenging, and much less generic. So the first example was the Trotterized nearest-neighbour Hamiltonian; the second example will be the unitary 6-vertex model. There is "integrability" in the title of my lectures, and these 10 minutes will be an example of how integrability works in these quantum circuits. This will not be the same integrability as the "integrable chaos" I promised, but honest Bethe-ansatz integrability; I just want to show you there is a connection between quantum circuits and the standard story of integrability in statistical physics, a la Bethe ansatz, Yang-Baxter and company. So, for example, you can write a two-particle gate like this, u(τ) = (1 + iτP)/(1 + iτ), where 1 always means a unit matrix of the appropriate dimension, and this P, which will appear quite often in my lectures, is the so-called permutation matrix, or swap gate, which flips the two particles. Here I will also assume that d = 2 for this example. Sometimes I say the matrix is 4-by-4; it is 4-by-4 if d = 2, otherwise it's d²-by-d². So I have a two-qubit gate which on its own is totally trivial: it just swaps the qubits, so it has four nonzero elements, P|ij⟩ = |ji⟩, and this other piece is the identity. So u is a combination, a complex (but not convex) combination, of identity and swap, arranged in a way that makes it unitary: u(-τ) = u†(τ), because P is self-adjoint, so Hermitian conjugation just sends τ to -τ; and then u(-τ) u(τ) = 1 because P² is equal to the identity. The rest of the proof is an exercise; I think most of you will see it, but for those who don't see it, it's a good exercise: just plug this
in, you just need P² = 1. It's really cool, because you have a very simple gate which is unitary and which is just a rational function of a parameter, and it turns out to be exactly the 6-vertex model: I can write its matrix elements with a = 1/(1 + iτ) and b = iτ/(1 + iτ). So that's the 6-vertex model of statistical mechanics. Again, why is it the 6-vertex model? You haven't yet seen how this will look, but you can imagine: if I just continue this circuit in the plane, then I have a two-dimensional tensor network, one dimension being time, and a two-dimensional tensor network is exactly what in statistical physics is called a vertex model. If you want to compute some scalar out of this, you impose some boundary conditions, you impose an initial state here, and you compute the amplitude with respect to some final state; then it becomes a number, no free indices anymore. This number is just the partition sum of a stat-mech model in 2D. I will go a little bit further into that tomorrow, when I discuss the spectral form factor. I stress there are a lot of close links between doing circuits and tensor networks and doing statistical mechanics in 2D, if you think of local gates. So in 2D this becomes a 6-vertex model, with six weights, but it's a special kind of 6-vertex model, because the weights are complex. People like Baxter would not be happy, because they like real positive weights, but still they provided a lot of results to us which we can now use; we just have to do the Wick rotation, if you want, to go to the complex plane. The weights become complex, the gates are no longer positive, but they are unitary, and it is still a 6-vertex model. So what does this buy us? It means that this model is really integrable. And what does that mean? This is an integrable model; I will call it XXX, because it is intimately related to what is known as the XXX Hamiltonian, even though it is not a Hamiltonian: it is a Floquet, or Trotterized, XXX
model. Why is it integrable? Because this gate (I will now drop the XXX label for convenience) is just a permutation times what people in integrability call an R-matrix; I call it an R-check, Ř. This is the famous R-matrix. Now bear with me: I will throw two identities from my sleeve which are very interesting, but I will not prove them; I will leave them as an exercise and only quickly say how to prove them. At least as a teaser they are maybe interesting, because they are the way we know this model is integrable, and they say what it means to be integrable. The fundamental feature of integrability is local conserved charges, or, equivalently, that there is a transfer matrix which is in involution and commutes with the time evolution. So what is the transfer matrix here? The transfer matrix is again a many-body linear object, which I will write as a tensor network with periodic boundary conditions: a product of R's with alternating spectral parameters, R(λ + τ/2), R(λ - τ/2), R(λ + τ/2), and so on, the last being R(λ - τ/2). I draw these R's in the skew fashion because I want to read each one as a gate: it is an R-matrix, but now I use it sideways. There are two kinds of spaces now, one horizontal and one vertical, both of dimension 2. The horizontal one I think of as the auxiliary space; people usually call it an ancilla or auxiliary space. The vertical ones are the physical spaces 1, 2, up to L. So this transfer matrix, let's call it T(λ), where τ is a fixed parameter but λ is a free complex parameter, I will now write as R_{1,a}(λ + τ/2) R_{2,a}(λ - τ/2), and so on, up to R_{L-1,a}(λ + τ/2) R_{L,a}(λ - τ/2), where a labels the auxiliary space and 1, ..., L the physical spaces. Now this acts
on each space: we have the product of the L copies of the physical space, sites 1, 2, up to L, times the auxiliary space. But then, as I said, I connect the ends, so I take a trace over the auxiliary space. And now what we can show, and this is an exercise I'm happy to discuss in the break with anyone interested, not difficult, but if you show it you learn quite something: show that this 𝒰 can be written as 𝒰 = T(-τ/2)^{-1} T(τ/2). What you need is, first, this unitarity property of the gate, which also means unitarity of the R-matrix, or what people call the R-check matrix, and second, the Yang-Baxter equation, meaning that this Ř satisfies the braid relation Ř₁₂(u) Ř₂₃(u+v) Ř₁₂(v) = Ř₂₃(v) Ř₁₂(u+v) Ř₂₃(u). Ah, yes, sorry, what was the question again? Well, anyway, this is something that you get in two seconds when you type it into Wikipedia; it's a standard thing, like an advanced exercise for those who want to understand integrability, and I'm happy to discuss it later, but it's sidetracking my lectures, so I don't want to go deeper into it. The point of these last 10 minutes was that there is a whole family of circuits which are connected to integrable systems, like vertex models in statistical mechanics, and they are integrable, so you can play all your interesting games. For example, last November there was a very interesting article in Nature by the Google group, which was already quoted yesterday, where they implemented exactly this circuit on the Sycamore quantum computer, and they were able to show integrability: they were able to show anomalous features related to integrability, like stable quasi-particles and long relaxation times. Yeah, it's a good point: λ is a free parameter, so τ is fixed but λ is free. So you see what I have to do now: as you
see here, if I put λ = τ/2, then every other R in the transfer matrix has spectral parameter 0, and when the spectral parameter is 0 the R-matrix is just a permutation. This is important in order to produce the two layers of the evolution; it's really the main trick of the derivation. This is the fundamental identity. So the next thing, of course, is to show that the T(λ) form a commuting family, [T(λ), T(μ)] = 0 for any λ and μ. That's the key feature of integrability: integrability means a commuting family of transfer matrices, which moreover generates the time evolution. That's the standard thing for those who have smelled integrability before; those who haven't can maybe forget about this, because the rest of my lecture will not be connected to it. But what is key is to have a commuting transfer matrix, and the dynamics has to be generated by the transfer matrix. Usually the logarithmic derivative of the transfer matrix is the Hamiltonian; in our case it is a kind of finite-time log derivative, and again it generates not a Hamiltonian but the fundamental, primitive dynamical map, the Floquet map. That is our paper with Matthieu Vanicat and Lenart Zadnik from 2018; Vanicat is the first author; it's a PRL. Okay, if there are no more questions on this second example, I go back now to our non-integrable circuits and will try to put some other, additional structure on them. But before doing that, let's discuss the fundamental observables that we want to compute, because it will be useful to define right away the objects that we want to compute, and then we will see how to compute those objects in additionally structured circuits. Okay, so: two-point functions. We'll start with two-point functions, and tomorrow, as I promised, we'll go beyond, but today we'll probably finish with those. So now let's think of a local observable: A is a local observable, which means that A can be
That means A can be considered as a d×d matrix: it's an element of the space of matrices over the single-site Hilbert space. So it's a local observable, and we embed it into H_L with an index j — let's call it j. I already wrote once how I embed the two-qubit gates; now I embed a local observable in the same way. I hope you understand this notation: I put tensor products in the exponent, meaning I take that many tensor copies of the identity, with A at the right place. So how would the tensor network look? Like this: j − 1, j, j + 1, ..., L — so it is almost an identity, except that at site j it does something: the observable is an operator. Now let's define the Heisenberg dynamics: A_j(t) = U(−t) A_j U(t), where U was defined — I erased it, but for reference it would be good to write it again: U(2t) would be (U_e U_o)^t, and U(2t+1) would be U_o (U_e U_o)^t — so, remember, odd, even, odd, even, ..., and you end with the odd layer when the time is odd. Right, now how does the tensor diagram look? I will try to show you that there is some use of these contraction rules like unitarity: we can already use them to simplify the tensor network for A_j(t). So let's do that. To simplify life a little, I will paint a filled bullet for an operator, because the empty circle I will use for something else — so please, even though my writing is usually very ugly, be reminded that this is a painted bullet. There are two big boxes like this: this box is U(t) and this is U(−t). So I want to teach you to think of Heisenberg dynamics as forward-backward time evolution: first forward, implement the observable, then backward; and if you sandwich it between two states, then it's just the same as propagating the state.
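The embedding a_j = 1^⊗(j−1) ⊗ A ⊗ 1^⊗(L−j) and the Heisenberg rule A_j(t) = U(−t) A_j U(t) are easy to spell out in code. A minimal sketch with a brickwork propagator built from one fixed Haar-random two-qubit gate on 4 qubits with open boundaries (the gate choice, boundary handling, and layer ordering are my assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(n):
    """Haar-random n x n unitary via QR decomposition with phase fix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def embed(A, j, L):
    """a_j = 1 ⊗ ... ⊗ A ⊗ ... ⊗ 1 with A at site j (0-based) of L qubits."""
    out = np.eye(1)
    for site in range(L):
        out = np.kron(out, A if site == j else np.eye(2))
    return out

L = 4
g = haar_unitary(4)                       # one fixed two-qubit gate
U_even = np.kron(g, g)                    # gates on (0,1) and (2,3)
U_odd = np.kron(np.eye(2), np.kron(g, np.eye(2)))  # gate on (1,2), open ends
U = U_odd @ U_even                        # one full period (ordering is a convention)

Z = np.diag([1.0, -1.0])
a1 = embed(Z, 1, L)
t = 3
Ut = np.linalg.matrix_power(U, t)
A1_t = Ut.conj().T @ a1 @ Ut              # Heisenberg picture: U(-t) a_j U(t)

# Heisenberg evolution is a similarity transform: trace and spectrum are preserved
assert np.allclose(np.trace(A1_t), np.trace(a1))
assert np.allclose(np.linalg.eigvalsh(A1_t), np.linalg.eigvalsh(a1))
```

The asserts check exactly what the diagram says: conjugation by a unitary moves the bullet through the circuit without changing the operator's spectrum.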
Now something nice happens. In order to see it, maybe I can really write down a 4×4 circuit — then we will stop using this again, but 4×4, quickly here; maybe I missed some wires, but hopefully nothing consequential. So it's 8 qubits. Now I put a bullet here, and I adjoin this guy. OK, this is like a mirror: the top is a mirror image of the bottom across this horizontal line. I might also worry about periodic boundaries, but that's not crucial now — I will think of L as sufficiently large so that I don't have to worry about periodic boundaries. Most of you have seen this type of picture — maybe there is none of you who hasn't — but you see, the point is that when you contract these against each other, the local gates face each other, except in the shadowed region. All those gates face each other, so I can use the contraction rules, which is unitarity: this guy is like this, and then this guy can be contracted like this, and at the end of the day you see that everything outside the shadow — the shadow made by rays which go at the speed of light, or speed 1 in this case — can be contracted away. So you get a tensor network which is not like this, but like two triangular-shaped propagators. Again, schematically — I will try to be as schematic as I can without losing exactness — you can now think of it like this: this is your A, and this — again, it's not the same as U, but you see the point — half of the gates have become completely redundant and I can remove them. So just unitarity and locality give me causality. There are lectures in fundamental courses in physics which sell this as a very deep thing; here everything is obvious. That's why I like these circuits so much: because most things
in physics become really obvious once you write them as LEGO cubes and learn how to manipulate them. For Hamiltonian evolution — which, indeed, has to have local interactions — this is also basically the intuitive explanation of Lieb-Robinson; when you do Hamiltonian evolution you have to worry about the remainder of the Trotter-Suzuki decomposition and all that, but you could try to prove Lieb-Robinson using this strategy. We will not worry about that: we will think of exact circuits, not Hamiltonian evolution, so this becomes exact. As Alessandro mentioned, the buzzword is Lieb-Robinson bounds: for those of you who have not heard of this, it is a general statement about locally interacting Hamiltonian dynamics on discrete lattices which says that there is an emergent causality, meaning correlations do not propagate faster than some upper bound — in our case this upper bound is just speed 1, one site per unit time, and not more. Now, I'm getting ahead of myself, but I can really show that there is nothing beyond that. Again, for the smart among you it's obvious, but still, it's not really obvious — I have to write it down. What I will now write down will show that correlations can indeed only be non-zero inside the cone. So what is the correlation? First I have to define it. For me, the correlation function will be an object which takes two local observables, call them A and B: I start with A and then measure the response in B. If you want, this observable acts like a local quench — an excitation of the density matrix — then you make the time evolution and measure B; that's like linear response: you make a small local quench and then measure B. And you measure it in the fully mixed state, because there is no other state around — everything is highly non-equilibrium, energy is in general not conserved, so there is no Gibbs state. The only Gibbs state
that makes sense is β = 0, infinite temperature, which is what mathematicians call a tracial state: the completely democratic average over the Hilbert space. So this is what I will define: I will measure observable B at position y, and observable A I will excite at position x and propagate to time t. That's the most general local two-point function you can think of, and in the next, say, 30 minutes I will describe to you how, for a class of dynamics which is at least not so trivial, you can compute this in terms of a simple Markov chain. So the next goal is to write explicitly, for what I will call dual-unitary circuits, this many-body correlation function of two local observables in terms of a single-body quantum channel — there is this connection, which I hope you will appreciate; it's really rather nice. Dual unitary — I'm just teasing you; I will define what I mean by that. By "dual" I mean that you flip space and time — I'm referring to space-time duality — so what I will mean is that the dynamics is unitary both when propagated in time and when propagated in space. But let me not get ahead of myself and go slowly. So, this object — just to spell it out — needs the proper normalization here, sorry, in order to be well defined: this is just a trace, what some people would write as a Hilbert-Schmidt inner product between the two observables A and B. If you have nice observables like the magnetization — a Pauli matrix, which squares to the unit matrix — then C^{zz}(x, x, 0) will be 1, so this is the normalization of the correlation function. And this 1/d^L is the correctly
normalized infinite-temperature state. OK, so what happens if I compute this correlation function when |y − x| is larger than t (but, of course, still smaller than L — the system has to be large enough)? This means — I have to draw it like this; well, maybe I go here — I can now place the second observable: here will be B, and this y could be here, so this is x and this is y. If |y − x| is larger than t, this guy sits in the empty space here. So what is the correlation function? I have to multiply these — that's what I have to do — but now you see it decomposes into a product: a product with some complicated beast here, which is actually not complicated at all, because we can again use unitarity in this direction, and then this becomes just tr A and this becomes tr B. So it is just tr A times tr B. From this picture it's obvious: once the observables are separated enough, the correlation function is a product of the traces, which means a product of the statistical averages. Now I will assume that my observables are traceless, which is a sensible assumption, because anything with a trace can be split: the traceful part is just a scalar and contributes nothing connected, so the honest connected part of the two-point function comes only from traceless observables. So I assume the observables are traceless; then this is zero. Whenever the observables are separated enough, this is zero, so the only interesting correlations happen when the observables are within their mutual light cone. OK — now, how am I doing with time?
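The factorization argument above can be verified by brute force on a small chain: outside the light cone, C(x, y, t) = d^{−L} tr(a_x(t) b_y) reduces to (tr A / d)(tr B / d), hence vanishes for traceless observables. A sketch using a brickwork circuit of one Haar-random gate on 6 qubits with open boundaries (my construction; I count one full even+odd period as one unit of time):

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(n):
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def embed(A, j, L):
    out = np.eye(1)
    for s in range(L):
        out = np.kron(out, A if s == j else np.eye(2))
    return out

L = 6
g = haar_unitary(4)
U_even = np.kron(np.kron(g, g), g)                             # gates (0,1),(2,3),(4,5)
U_odd = np.kron(np.eye(2), np.kron(np.kron(g, g), np.eye(2)))  # gates (1,2),(3,4)
U = U_odd @ U_even                                             # one period = one time unit

def corr(A, B, x, y, t):
    """Infinite-temperature two-point function d^{-L} tr(a_x(t) b_y)."""
    Ut = np.linalg.matrix_power(U, t)
    Ax_t = Ut.conj().T @ embed(A, x, L) @ Ut
    return np.trace(Ax_t @ embed(B, y, L)) / 2**L

Z = np.diag([1.0, -1.0])
# traceless observables, |y - x| = 5 > t = 1: strictly outside the light cone
assert abs(corr(Z, Z, 0, 5, 1)) < 1e-10
# observables WITH a trace factorize into (tr A / d)(tr B / d) outside the cone
Pup = np.diag([1.0, 0.0])
assert np.isclose(corr(Pup, Pup, 0, 5, 1), 0.25)
print("outside the light cone: C^zz = 0 and C factorizes into traces")
```

Both checks hold for any choice of gate, which is the point: only unitarity and locality were used.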
I think I'll be fine; I just have to get to dual unitarity today. Right, so now I will go to the next ingredient, which has been around for several years in various manifestations in various papers: the folded picture. This folded picture has different names — in high-energy physics it's called thermofield dynamics, and so on and so forth. It's just the idea that you can treat Heisenberg dynamics as Schrödinger dynamics in a larger Hilbert space: instead of thinking of a forward and a backward branch in time, you go only forward in time, with a doubled propagator acting on states which encode operators. And actually I made a simple experiment to show you how this works — I hope you can see the paper; that's how the Heisenberg evolution of a local operator looks. Now I will actually fold it, like this: this is U and U† — sorry, U and U† above the observable — and when I fold, the lower guy comes on top as its transpose. So now I basically have an evolution acting on a state, generated by the tensor product U^T ⊗ U† — this guy, when I fold it, has to be transposed, so it's U^T tensor U† that generates this evolution. That's just a way to memorize these things, but now I can write formulas; I guess it was useful to see the experiment. So I define the local operator space, which is the space of endomorphisms of the single-site qudit space; that is isomorphic to two copies of the space, which means that I can take an operator with matrix elements a_{ij} and identify it with an object which resides in the tensor product of the two copies. And now I will just introduce the alternative diagrammatics: instead of this guy, which is my A, after folding I will write it like this. OK — and now the unit operator: this is the unit operator; when it folds,
it's just like this. And remember, two wires have now become one — in our papers we usually use a different thickness of line after folding; the wires get thicker, but in my handwriting this cannot be shown, so just imagine it. After the next five minutes I will only use folded circuits, so all the wires will encode operators, not states — but it's the same thing. So now, again, this is the same as before, but with a circle. I have to be a bit careful with normalization; to make the normalization convenient I will identify this with vectorization, as I said — this trick appears in many different forms in the literature and is very useful. For example, what happens with the gate? The gate gets folded back onto itself, so it's two copies: the copy behind is U† and the copy in front is U^T, with indices ii′, jj′, kk′, ll′. This is what I will denote W: my W will be U^T ⊗ U†. So now I can simplify my correlation function — that's the idea; the purpose is to have a very convenient, simple representation of the two-point function, because before you had these forward-backward contours, and in what I'm going to discuss next I want only a forward contour. This just simplifies things. Now I define the fundamental folded gate — remember my notation, my choice of symbols: U is the local gate, W is the local gate in operator form, and the script W will be formed from it. There is one thing I have to convince you of — or I just give it to you as an exercise, because maybe it's too much to go into all the detail: when you do this folding on a circuit, the order of the even and the odd layers interchanges. So when you write this object acting on a local operator, it has to start first with the odd layer and then the even one — these are two layers.
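The folded-gate rule just stated — one Heisenberg step of an operator becomes one application of W = U^T ⊗ U† to its vectorization — is a one-line identity: with column-stacking vectorization, vec(ABC) = (C^T ⊗ A) vec(B), so vec(U† A U) = (U^T ⊗ U†) vec(A). A quick numerical check (column-stacking is my convention here; row-stacking would swap the two Kronecker factors):

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(n):
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def vec(A):
    """Column-stacking vectorization: |A> lives in two copies of the space."""
    return A.flatten(order='F')

d = 4
U = haar_unitary(d)                 # e.g. one two-qubit gate
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

W = np.kron(U.T, U.conj().T)        # folded gate: U^T ⊗ U†
# one Heisenberg step equals one application of W to the vectorized operator
assert np.allclose(W @ vec(A), vec(U.conj().T @ A @ U))
# W is itself unitary, so the folded picture is again a unitary circuit
assert np.allclose(W @ W.conj().T, np.eye(d * d))
print("folding: Heisenberg step = one application of W = U^T ⊗ U†")
```

The second assert is why the folded circuit obeys the same contraction rules as the original one.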
I just explained the two layers here: this was even, odd, and this was odd†, even†. So you see, when you start applying it to an operator, you start with the odd layer, then the even — the order changes. This is usually inconsequential, but you can easily make typos in papers if you're not careful. OK, now the same thing again: the even layer is now W tensored with itself L/2 times. And now let's write unitarity again — unitarity is super cool in this language; it's really simple. What does unitarity mean? Just look at this: contract these two wires, and then it's like U U†, and this has to be the same as this. So contracting these two guys means this equals this — and of course there are two forms of unitarity, U†U = 1 and U U† = 1, so the other one is when you contract from above. Now these contraction rules simplify things even more. Look at the correlation function again: the correlation function would be like this — place observable A at position x, so this bullet at position x is A; this bullet at position y is B — and then everything outside this cone contracts away. Now that we have folded, we contract everything outside the cone, so I just leave it. We can also do the same from above: using this unitarity — remember we have these guys here — we can contract away everything here, up to the shadow. I will not waste your time writing out in detail how the circuit looks; I'll just write the boundary condition: the circuit is a vertex model with this boundary condition, except that here there is a bullet, because I can use the top unitarity — top unitarity means this shadow. So what I have now is basically a vertex model on a rectangle. So what is left to contract, and
which is hard — which I cannot say in general how to do — is a classical statistical-mechanics problem on a rectangle whose sides are roughly t − (y − x) and t + (y − x). First of all, t has to be larger than the distance y − x, otherwise the whole thing collapses to nothing; but if it is larger, there is this cute rectangle, which becomes a square when y is equal to x — when y equals x we have a perfect square, which is the hardest case. So computing correlation functions is as hard as computing partition sums of classical vertex models — even harder, in fact, because the weights are complex, so there is no Monte Carlo; and it's known in the computer-science literature that these kinds of problems are usually hard — even #P-hard. How am I doing with time? Shall I continue for another 10 minutes and then take 5 minutes for questions? Any question? Which part?
I convinced you that the computation of a two-point function is equivalent to computing the partition sum of a vertex model — but this vertex model, the way I formulated it, is a vertex model with d²-dimensional vertices; d² because we had to fold. If you want to think in terms of the simple Hilbert space, you can think of a two-sheeted vertex model: two layers, because we have folded, connected here — like two pieces of paper, one carrying the forward contour, one the backward contour; this is A and this is B, and they are glued here, here, and here. These bullets just mean gluing the upper sheet to the lower sheet. It doesn't really bring you anything to know that this is a two-layered surface; as you will see, I will now make a drastic simplification by assuming something else, and then we will be able to say a lot. Before going there, let me just fix a convention: the bullet with indices i, j stands for 1/√d times the matrix element a_{ij}, and the circle with indices i, j stands for (1/√d) δ_{ij}. Please remember: to make things consistent, I put a factor of the square root of the Hilbert-space dimension on each of these boundary vertices; otherwise I would have to put the normalization factor at the end, and I decided to put it locally so that I don't have to worry about it. It's just a convention, and once I adopt it, this really is the diagrammatics of our correlation function, with no extra correction needed. OK — now, to close for today, I have to introduce something different, something that goes beyond this. So far it was just a reduction to circuits and tensor networks; what I will do now is introduce you to dual-unitary circuits and tell you what is cool about them. — Does it extend to two-site observables? — Yes, it extends to any observable with finite support; it's essentially the same, it just gets a little bit technically
messy, but it's the same thing. Of course, everything is different when you have non-local observables, which are all over the place — then we have to rethink everything — but here we are really asking what local, geometrically local, dynamics can give us. OK, so now, yes: dual-unitary circuits. Remember the gate we started with, drawn like this, with this direction being time t; let's call this direction space, x. Now let's think of it as a gate acting in this other direction: let me define a gate, which I will denote Ũ, that maps (i, k) to (j, l) — so instead of going this way, mapping (i, j) to (k, l), it goes from (i, k) to (j, l). That is, Ũ_{jl,ik} = U_{ij,kl}: we just flip these two indices — we organize the indices on a square and reflect across the space-time diagonal, flipping space and time. Now imagine what happens if you demand that this Ũ is also unitary — that it also defines a unitary dynamics; all the pictures I drew make sense if I think of time going sideways. So what I now want is that Ũ is also unitary: dual unitarity means Ũ Ũ† = 1 and Ũ† Ũ = 1. What is the diagram giving the second identity? Well, U†, but now you have to connect this leg with this leg — call these i, k and i′, k′, because we did transpose and conjugation — and if this identity holds, taking this as a bra and this as a ket, then it should be equal to δ_{ii′} δ_{kk′}; and the other way around for the other identity, Ũ† Ũ = 1, which goes in the other direction. Now, this looks messy — you can work with it; we actually worked with these diagrams — but it turns out that if you fold, the picture gets much more compact. Indeed, if you write these identities — think again of folding our paper, so this guy goes on top — it becomes U^T and U†.
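The reshuffling Ũ_{jl,ik} = U_{ij,kl} is a single index transpose, and there is a well-known family of two-qubit gates that is dual unitary for every value of its parameter: V[J] = exp[−i(π/4 XX + π/4 YY + J ZZ)] (the core of the qubit parameterization mentioned later in the discussion; the index convention in `dual` below is one consistent choice of the space-time flip, and is my assumption):

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def two_body(J):
    """V[J] = exp[-i(pi/4 XX + pi/4 YY + J ZZ)] via the eigenbasis of the Hermitian generator."""
    H = (np.pi / 4) * (np.kron(X, X) + np.kron(Y, Y)) + J * np.kron(Z, Z)
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w)) @ v.conj().T

def dual(U):
    """Space-time flip: Utilde_{jl,ik} = U_{ij,kl} (one choice of convention)."""
    return U.reshape(2, 2, 2, 2).transpose(1, 3, 0, 2).reshape(4, 4)

def is_unitary(M):
    return np.allclose(M @ M.conj().T, np.eye(M.shape[0]))

U = two_body(0.37)
assert is_unitary(U) and is_unitary(dual(U))   # dual unitary for every real J

# a generic unitary is NOT dual unitary: try a Haar-random one
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
q, _ = np.linalg.qr(z)
assert is_unitary(q) and not is_unitary(dual(q))
print("V[J] passes both unitarity checks; a generic gate fails the dual one")
```

The second half of the block makes the point of the lecture concrete: dual unitarity is a genuine extra constraint, satisfied only by special gates.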
This then means you have to attach bullets on the sides: this guy becomes W with bullets on this side, like this, and the lower diagram becomes the analogous one. We will call these contraction identity 3 and contraction identity 4, so we have four contraction identities — 1, 2, 3, 4: two unitarities and two sideways unitarities. The question is how powerful they are. The purpose of this game is to contract your circuit — written as a tensor network — as much as you can, down to something that is efficiently contractable on a classical computer, or even analytically. Going back to the general correlation function: of course this is hard in general, but it becomes easy when one of the sides of the rectangle is small — I think this is minus and this is plus — so in the degenerate case y − x = t it becomes a one-dimensional tensor network, and that's easy: polynomial complexity, because it's just multiplying matrices. You first contract these on the sides; what remains is a matrix with a row and a column index, so it's just iterating a matrix on an initial vector built from A and finally contracting with B. Now, what can we do for dual-unitary circuits? It so happens that for dual-unitary circuits we can show this is the only diagram which survives. I think I will just close here for today — if you don't see it, I will show you tomorrow morning; let me first say it in words, and tomorrow I will draw a few pictures. If you have this diagram, the sideways unitarity allows us to annihilate it: to show that if A and B are displaced by less than t — if y − x is less than t — then the diagram can be reduced to a product of traces of A and B, which is 0. So the only case in which it cannot be reduced to a product is when t equals y − x, which is where we get this surviving diagram.
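The surviving light-cone diagram is generated by a single-site map: in the dual-unitary literature this is the channel M₊(a) = (1/d) tr₁[U† (a ⊗ 1) U], which propagates a one-site observable one step along the light-cone edge, so the edge correlator reduces to iterating M₊. A sketch that builds this channel for the V[J] gate and verifies its channel-like properties — the identification of the edge correlator with tr(b M₊ᵗ(a))/d is the statement quoted in the lecture, which the diagrams tomorrow prove, so here I only check unitality and trace preservation:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def two_body(J):
    """V[J] = exp[-i(pi/4 XX + pi/4 YY + J ZZ)], a dual-unitary qubit gate."""
    H = (np.pi / 4) * (np.kron(X, X) + np.kron(Y, Y)) + J * np.kron(Z, Z)
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w)) @ v.conj().T

def channel_plus(U, a):
    """M_+(a) = (1/2) tr_1 [ U† (a ⊗ 1) U ] for a one-qubit observable a."""
    M = U.conj().T @ np.kron(a, np.eye(2)) @ U
    return 0.5 * np.trace(M.reshape(2, 2, 2, 2), axis1=0, axis2=2)

U = two_body(0.4)
# unitality: the identity is a fixed point, i.e. the infinite-temperature
# state is invariant under the channel
assert np.allclose(channel_plus(U, np.eye(2)), np.eye(2))

a = Z
edge = []
for t in range(6):
    edge.append((np.trace(a @ Z) / 2).real)   # candidate edge correlator overlap
    a = channel_plus(U, a)
    assert np.isclose(np.trace(a), 0)         # trace preserved: Z stays traceless
print("overlaps of iterated channel with sigma^z:", np.round(edge, 4))
```

The decay of this single-qubit sequence is what replaces the #P-hard vertex-model contraction on the light-cone edge.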
Now I think I should close for today and maybe take some questions. OK, thank you — let's thank Tomaž. Further questions? — Well, historically it came from first thinking about this space-time duality, a concept first discovered for the kicked Ising model; it was then found that the kicked Ising model is a special case of dual-unitary circuits. I will mention it in more detail tomorrow, because the simplest model for which we can prove something about spectral correlations is the kicked Ising model at the self-dual point, which means that it looks the same — essentially it is the same model — whether you read it space-wise or time-wise. We stared at this and saw: wow, but this is just a special case of a general class of gates; we just impose this identity. Then of course there are other things — there are some exercises I want to give, but I should probably give them tomorrow; there is no time. One thing is kind of important, although I would have to introduce some other concepts to say it precisely: dual-unitary circuits correspond to objects in quantum information theory which are also kind of mysterious, like unistochastic quantum channels. There is a question of how to classify those, and what we can show is that this is mathematically identical: this type of dual-unitary gates corresponds to a particular class of quantum channels in quantum information. — This seems like a very special condition; is it robust to deformations? — I think that is the main question one has to answer, and I believe the answer is yes, it should be robust; there is some partial evidence for that, and I hope I have enough time tomorrow to tell you more about it. Of course, just like integrability, dual unitarity is a fine-tuned feature, so it is not structurally robust — as you break it, it is gone — but I believe it has a much better chance of being perturbatively stable,
because, as you will see tomorrow, we can now disclose the full ergodic hierarchy here: we can find examples with an arbitrarily fast degree of relaxation to equilibrium — unlike in integrable systems, or in systems close to integrable, where there is this annoying pre-thermalization phenomenon. Here you can engineer models that relax to equilibrium in one shot, or in a few iterations, so correlations can decay super fast, exponentially, and I think that's a very good case for structural stability. My intuition says this should also yield some results on stability, but we are not there yet; it's work in progress, and I think it's a very interesting question. — Hi, just a technical clarification: on the second chalkboard, for contraction identity 4, the bullet should be on the other side of the two legs, right? — Yeah, yeah, sure, sure. — Thank you. — Also, when working with this type of model, is the strategy to construct a U that exhibits this property and then to explore what happens? — I will tell you tomorrow: we have a full parameterization of these dual unitaries for qubits; finding the full characterization for qudits, for d larger than 2, is an open problem, related to this problem of classifying quantum channels, but we have large families of examples for higher Hilbert-space dimensions. You can even think of classical limits of dual unitaries, which are also interesting; I will say more about this tomorrow. — Thank you. Any more questions?
No, they are not always chaotic. They are generically chaotic, but they can be everything — they can even be integrable, they can be non-interacting integrable; there is a whole zoo, which I will discuss tomorrow. There is what is usually referred to as the ergodic hierarchy in dynamical-systems theory, and here you can find examples of all the classes that mathematicians like as representative examples of dynamics. — Is it generalizable to two plus one dimensions, and is it useful to do so? — In two plus one? — In two spatial dimensions and one temporal. — It is. There are papers already out proposing two-plus-one-dimensional dual unitaries; they call it ternary unitarity — how is it called? ternary unitary — because you have unitarity in three different ways. There is a paper by Christian Mendl and collaborators from TU Munich where they discuss this; I don't know whether it was published in PRL, but it was written letter-style — I read it on the arXiv, and you can find it under this name — and it's a very analogous construction. — This is a follow-up question: you said that it's not always chaotic, right?
So the point is: in the dual-unitary lecture we saw... is there some fine-tuning parameter by which one can drive an integrable-to-ergodic transition in this kind of model? And, in between, does the transition go through some maximally chaotic region rather than happening directly — is that the kind of feature one can see here? — You will see tomorrow: there is a very simple ergodicity-to-non-ergodicity transition in this model. You just play with some parameters of the gate, and you get a transfer matrix whose gap suddenly closes, and then it becomes non-ergodic. This transfer matrix is finite-dimensional, so everything is fully analytical and without any issues. Again, you can say it's a bit boring because it's fine-tuned, but it's at least one example where we can control the chaos-to-non-ergodicity transition. — If there are no further public questions, you can continue discussing with Tomaž. Well, OK — let's meet at 11:05 for the next lecture.