So the idea is to take spin-j degrees of freedom: each site carries the (2j+1)-dimensional spin-j representation of SU(2), and the gate is a swap-type two-site gate dressed by single-site SU(2) rotations — the single-qubit gates are what survives. So again, it's the same exercise to show that this is dual unitary. Of course it's by far not complete: not every dual-unitary gate in d dimensions can be written like this, but it's an interesting family, inspired by the dual-unitary family for qubits — like a higher-dimensional representation of the same construction. What is nice is that this family has a classical limit: sending j to infinity, with 1/(2j+1) playing the role of an effective Planck constant, the model becomes semiclassical. The model with the swap is particularly nice, and in the classical limit you get chaotic classical dynamics of classical spins: a lattice of classical maps, coupled maps of classical spins, which is symplectic, with the canonical Poisson-bracket structure — a kind of classical automaton. And in the classical limit one obtains a classical Markov chain.
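A small numerical check of dual unitarity for the qubit version of this family. Everything here — the core gate V[J] = exp[−i(π/4 (XX+YY) + J ZZ)], the single-qubit dressings, and the leg-reshuffling convention in `space_dual` — is my reconstruction of the standard qubit dual-unitary family, not the lecturer's notation; it is a sketch under those assumptions.

```python
import numpy as np
from scipy.linalg import expm

def haar_1q(rng):
    # Haar-random single-qubit unitary via QR of a complex Gaussian matrix
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def dual_gate(J, rng):
    # Two-qubit dual-unitary gate: single-qubit dressings around the core V[J]
    V = expm(-1j * (np.pi / 4 * (np.kron(X, X) + np.kron(Y, Y)) + J * np.kron(Z, Z)))
    return np.kron(haar_1q(rng), haar_1q(rng)) @ V @ np.kron(haar_1q(rng), haar_1q(rng))

def space_dual(U):
    # Reshuffle the legs so that the space direction plays the role of time:
    # U[(a,b),(i,j)] -> M[(i,a),(j,b)]
    return U.reshape(2, 2, 2, 2).transpose(2, 0, 3, 1).reshape(4, 4)

rng = np.random.default_rng(7)
U = dual_gate(0.3, rng)
M = space_dual(U)
print(np.allclose(U.conj().T @ U, np.eye(4)))  # unitary in the time direction
print(np.allclose(M.conj().T @ M, np.eye(4)))  # also unitary in the space direction
```

Both checks pass for any value of J and any choice of the single-qubit dressings, which is exactly the dual-unitarity property of this family.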
M plus and M minus now become infinite dimensional: in the classical limit they become infinite-dimensional classical stochastic matrices, and their spectra determine the decay of correlations. So that program has actually been done. This was work done together with students in Cergy: Alexios Christopoulos, Andrea De Luca and Dmitry Kovrizhin. I think it's a nice curiosity. I don't know how much development it will turn out to stimulate, but it's a curiosity that you can also do dual unitaries for classical coupled map lattices. OK, now let's go back to our discussion of spectral correlations. So, now that we have defined the spectral form factor, let's discuss its essential scales. Now you see the reason why I decided to put this strange normalization factor here, so that we get no extra normalization here. Sometimes people define it differently, but OK, for us this will be it. Well, you can put the minus here and the plus there, but it doesn't matter, it's the same: it is the modulus squared of the trace of U to the t. Now there are two timescales — well, there is basically a single timescale which is completely universal, and that's related to the density of states, right? The density of states is basically the number of levels. So there is the spacing between adjacent levels, which goes like 1 over N; it goes like 2 pi over N. This defines a timescale, meaning there is a time at which this spacing times t becomes of order 2 pi, and this timescale is called the Heisenberg time. Because — what happens with the dynamics? Maybe it's instructive to write out the double sum and multiply these two traces: then it is a sum over n, n prime of e to the i (phi n minus phi n prime) times t.
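The scales just described can be seen directly in a small numerical experiment. A minimal sketch, assuming the convention K(t) = ⟨|Tr U^t|²⟩ with the average over Haar-random (CUE) unitaries of dimension N; the exact CUE result is K(t) = min(t, N), so one sees a linear ramp up to the Heisenberg time t_H = N and a plateau at N afterwards.

```python
import numpy as np

def haar_unitary(n, rng):
    # Haar-random U(n): QR of a complex Ginibre matrix with a phase fix
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def sff(n, tmax, samples, rng):
    # K(t) = ensemble average of |Tr U^t|^2, computed from the eigenphases phi_n
    K = np.zeros(tmax)
    for _ in range(samples):
        phases = np.angle(np.linalg.eigvals(haar_unitary(n, rng)))
        for t in range(1, tmax + 1):
            K[t - 1] += abs(np.exp(1j * phases * t).sum()) ** 2
    return K / samples

rng = np.random.default_rng(0)
N = 8
K = sff(N, 20, 2000, rng)
print(K[:4])   # ramp: close to [1, 2, 3, 4]
print(K[-1])   # plateau: close to N = 8
```

Note that a single sample of |Tr U^t|² fluctuates wildly (the form factor is not self-averaging, as emphasized later in the lecture); only the ensemble average is smooth.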
So it's like a sum of quasi-random terms, right, which become really quasi-random after a time t which is larger than the inverse mean level spacing. There is a minimal spacing which is of the order of 1 over N; if time becomes larger than N, then even the smallest spacing, times t, becomes a multiple of 2 pi. I mean, then even the smallest spacings destroy the coherence, the phase relations: all these terms are now completely dephased and can be considered quasi-random. So what I want to say is that there is an absolute time, the Heisenberg time, after which the spectral form factor becomes completely structureless — there cannot be any structure anymore. For any ensemble of random matrices, or any class of dynamical systems, for times longer than the Heisenberg time there is nothing really interesting anymore, just quantum recurrences and nothing else. OK, now let's see how the spectral form factor looks for the typical universal classes of random matrices. So: spectral form factor in random matrix theory. There is the Heisenberg time, which is of the order N, as you see, and there is a scale N also associated to the spectral form factor itself: it turns out that the spectral form factor basically cannot be substantially larger than N, but it can approach N. And then there are different types of random matrices — the so-called Dyson's threefold way. There are three fundamental classes of random matrices. I will use unitary rather than Hermitian matrices; this is related to the standard Gaussian random matrix ensembles. These are called Dyson's circular ensembles, but they are completely analogous to the standard Gaussian ensembles. So there is the unitary ensemble, called CUE, which is just the unitary group: it is the unitary group with the Haar measure.
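For reference, the standard form-factor results for the three classes discussed here can be summarized compactly (my summary, in the convention K(t) = ⟨|tr U^t|²⟩, with N the matrix dimension and integer t ≥ 1):

```latex
\begin{aligned}
K_{\mathrm{CUE}}(t) &= \min(t,\,N),\\[2pt]
K_{\mathrm{COE}}(t) &= \begin{cases}
  2t - t\,\ln\!\left(1+\dfrac{2t}{N}\right), & t \le N,\\[8pt]
  2N - t\,\ln\!\left(\dfrac{2t+N}{2t-N}\right), & t \ge N,
\end{cases}\\[2pt]
K_{\mathrm{Poisson}}(t) &= N .
\end{aligned}
```

Expanding the COE expression for small t gives 2t − 2t²/N + …, i.e. initial slope 2, twice the CUE slope, and both expressions approach the plateau value N at long times.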
So the idea is: if you don't know anything about your evolution operator, you replace it with a Haar-random unitary. This is the variant of the random matrix ansatz for unitary matrices — usually what people do is replace the Hamiltonian by a random Hermitian matrix; now we replace the unitary by a random unitary matrix and see what happens, right? People have computed the spectral form factor for this ensemble: you average over the unitary group — there is a well-defined integral over the unitary group with the Haar measure — and you evaluate the modulus squared of the trace of U to the t. This can be computed; there are tricks in random matrix theory which allow you to do it, some very nice tricks, but I have no time to talk about them. The result is a very nice function: it is a piecewise linear function of t, it grows linearly up to N and then it plateaus. And the standard wisdom for using these random matrix ensembles is: you use the unitary ensemble for systems without time-reversal symmetry, i.e., without any anti-unitary symmetry. When there is no time reversal in your chaotic model, the common wisdom is to use the unitary ensemble, either CUE or, in the Hamiltonian case, GUE. By the way, the spectral form factor is the same for circular and Gaussian matrices in the thermodynamic limit — for the circular ensemble it actually holds even at finite N, while for Gaussian ensembles, where time is continuous, there are some corrections, but in the thermodynamic limit it is the same. Now, when you have time reversal or other anti-unitary symmetries, you use the circular orthogonal ensemble, which is formed of matrices V which can be written as U times U transpose, where U sweeps over the unitary group with
respect to the Haar measure. This ensemble is not a group anymore — the first ensemble was really a group, this one is not. These are complex symmetric matrices: the way it's written, V equals V transpose, so V is symmetric but complex. It's not like the Gaussian Hermitian case, GOE, where the Hamiltonian is real; here we cannot write it as real. Sometimes people make the mistaken assumption that these are just orthogonal matrices — they are not orthogonal matrices, they are complex matrices which are symmetric. For this ensemble people can compute the spectral form factor as well: again you do an integral over the unitary group, but now of the modulus squared of the trace of (U U transpose) to the power t, and you again get an analytic result: 2t minus t log(1 + 2t/N) for t less than or equal to N, and 2N minus t log((2t + N)/(2t − N)) for t larger than or equal to N. We don't have to remember that; what is important is that this starts with slope 2: for small times it grows with twice the slope of the unitary ensemble, and then it goes smoothly to the plateau. So this is the orthogonal class. Then there is another ensemble, the circular symplectic ensemble, which I will skip in my discussion: it has slope 1/2 at small times and then a spike near the Heisenberg time. I should probably not even discuss it because it will not be important for the rest of our discussion; just to let you know that there is a third ensemble, which corresponds to spin-1/2 systems with an odd number of spins — so half-integer total spin — and time reversal which squares to minus 1, giving Kramers degeneracy. But as I say, I will not say anything else about it. And then the last ensemble which I have to discuss is the so-called Poisson ensemble, which are not really random matrices — or if you want, they are random
matrices whose eigenphases are IID random variables, uniform between 0 and 2 pi. For that I give you an exercise which is instructive — it's a really easy exercise, but still instructive — to compute the spectral form factor. Since the eigenphases are IID you can average everything out, and what you get is that the spectral form factor for this ensemble is just flat, equal to N. So for the Poisson ensemble we have this, and this is usually related to integrable systems. Now there are two main conjectures. One is called the Bohigas conjecture — the Bohigas–Giannoni–Schmit conjecture — or I prefer to call it the quantum chaos conjecture, since there were other people before Bohigas who understood it quite well. The conjecture says that when you have a chaotic dynamical system, you have random matrix spectral correlations. Strictly it goes one way — I think it goes both ways, but as stated: when you have a chaotic dynamical system — and you have to have a proper definition of chaos, which here means a system with a classical limit whose classical dynamics is chaotic — then you expect random matrix spectral correlations. This conjecture is not proven; there are some heuristic ideas, which I will briefly discuss in the following. So for a physicist the mechanism is known, but for a mathematician there is still no proof. It's a very deep conjecture, and it associates chaos with random matrix spectral correlations; this is what I am going to take also as the definition of quantum chaos for interacting spin chains and many-body systems alike. And then there is another conjecture, which is well known — though less well known than the first classic conjecture — called the Berry–Tabor conjecture, which says that when you have integrable dynamics you have Poisson spectral correlations. The Bohigas–Giannoni–Schmit conjecture was formulated in 1984, and this other conjecture was formulated even before, in 1977, by Berry and Tabor, and they had a very nice
geometric picture behind this, for systems which have few degrees of freedom and where you can write the eigenvalues in terms of quantum numbers. Because when you have integrability, then — at least in the semiclassical limit — you can write the eigenvalues in terms of quantum numbers via so-called torus quantization: you find the Liouville–Arnold tori and you quantize them, so that each eigenvalue is associated to a particular torus on which the actions are quantized, in the old Einstein–Brillouin–Keller way — one looks at the actions around certain loops, and the actions have to be integer multiples of Planck's constant. So there is a very nice picture of Berry and Tabor which shows, in the generic case, how this connects to Poisson statistics. These are the two classic conjectures, which we will use as a working definition of chaos and regularity. But, as I say, my motivating question for the rest of my lecture is to show you some ideas about a possible mechanism for random matrix spectral correlations in interacting many-body systems like dual-unitary circuits. So now, since I went a little bit into the old story of semiclassical quantum chaos, let me continue for another 5–10 minutes to tell you a little bit more of this history. This goes in a subsection under the title: semiclassical chaos — periodic orbit theory. This was like the main buzzword at the conferences and workshops which I attended as a PhD student and young faculty in the mid to late 90s: it was all about periodic orbit theory. Then somehow the field wound down, and now probably most people don't know what it is, but I think it's still worth reminding ourselves from time to time, because it was really a nice program. The whole idea of this program was actually inspired by Berry's 1985 paper. I will now give you a two-line calculation of the spectral form factor for chaotic systems, following Berry, I
mean, which explains at least the main feature relevant to the quantum chaos conjecture, namely the linear ramp of the spectral form factor. As I already tried to explain, there are different time scales, and the spectral form factor covers different regimes: looking at short times means long ranges in quasi-energy, and this is what people call spectral rigidity. If you look at spectra across large distances, the fact that you have this ramp means that the spectral form factor is much smaller than for integrable dynamics, i.e., for independent levels — the flat value comes from assuming the levels are independent random numbers, while in reality the spectral form factor in many-body systems is exponentially smaller, because the gap between these two values is exponentially large, of order d to the L. That is something really remarkable, and it has to do with what people call spectral compressibility, or long-range spectral correlations, in random matrix spectra. Then, as you go to the very end, close to the Heisenberg time, you still have what is called the correlation hole: there is still a gap between the random matrix result and what you would get for random levels, and this is related to level repulsion. You have probably seen what people like to draw in numerical investigations: the distribution of level spacings. You take two adjacent quasi-energies and look at the distribution of spacings; for a Poisson (independent) spectrum the level spacing distribution is exponential, but for chaotic spectra, associated to random matrix ensembles, you have the so-called Wigner–Dyson distribution, whose probability vanishes for small spacings — that's referred to as level repulsion. So level repulsion can be identified with the behavior of the spectral form factor at times close to the Heisenberg time: there is still a gap because
the spectral form factor is smaller than the Poisson value, which is a manifestation of level repulsion. Now, Berry offered, already many many years ago, a very clever and cute argument for why you should have a linear ramp of the spectral form factor, and this is something I think deserves to be recalled because it is so cute. So let me now elaborate a version of this argument for Floquet systems. Usually it is given for autonomous Hamiltonian systems — of course there you also have a linear ramp, just with a different slope, and those slopes can be explained as well by Berry's simple argument — but I will write the version of the argument for Floquet systems. The idea is to write the trace of U to the t. The main question in this game is to be able to write, in one way or another, some analytical formula for the trace of U to the t. As I maybe have not yet explained sufficiently well, this is even simpler than a correlation function: a correlation function had two observables, while this is very clean — no observables at all. It is really a partition sum of a classical vertex model, with complex weights and very simple boundary conditions: it's just a trace, so periodic boundary conditions. You have U to the t, and then you take a trace, which means you identify the qubits at the top and bottom. If you also take periodic boundary conditions in space, then basically this is a partition sum on a two-dimensional discrete torus — periodic in time, periodic in space. So now the question is: can you do something with this? I'm now jumping back and forth between semiclassical and quantum many-body; let's go to the semiclassical side and forget about spin chains for the moment, just for the moment,
and see what Berry taught us, and then we'll try to find some parallel to that. Following the textbook of Feynman and Hibbs, for example — if you read that beautiful textbook on path integrals — you know that this trace of the propagator to the power t can be written as a path integral: a sum over all classical trajectories which are closed, i.e., periodic with period t (t iterations of the Floquet dynamics), weighted by e to the i times the classical action divided by hbar. That's a nice formula, and now you can do a saddle point, or stationary phase, approximation to it: when hbar is small, you look for all trajectories for which the phase — the action — is stationary, and these are just the classical trajectories; this is the classical variational principle which gives classical dynamics. But now the condition is periodicity, so these are periodic trajectories. So basically you can write this as a sum over all periodic trajectories of length t, and it turns out that for nice chaotic systems the periodic trajectories are countable, so you can actually organize them into a meaningful sum. Whatever is left from the path integral is then a sum of individual terms, each with an amplitude and a phase — the phase being the classical action evaluated at the saddle (stationary) point, and the prefactor being a Jacobian, something related to the Lyapunov exponent of the periodic trajectory: basically the inverse of the stability Jacobian. This means that the more chaotic the trajectory, the less it contributes, in a way that makes the sum convergent — or at least not really convergent, but conditionally
convergent for nice chaotic dynamics. Now remember what we have to do: we have to compute the modulus squared of the trace of U to the t, which means we get a product of two such sums. What Berry suggested is that in this horrible double sum you can use a kind of random phase approximation: hbar is small, so the actions divided by hbar are large numbers, giving many multiples of 2 pi — pseudo-random contributions which cancel out, unless the two actions are exactly the same. That is the so-called diagonal approximation. In the diagonal approximation what we want is S_p equal to S_p prime — not p equal to p prime, but the actions of the two trajectories being the same. And now you see why time reversal becomes important: in systems with time-reversal symmetry, a periodic trajectory — a closed loop in phase space — has a partner, its time-reversed trajectory, with exactly the same action. So each trajectory can be paired with itself or with its time-reversed partner, and the double sum collapses to a simple sum of |A_p| squared, with an extra factor of 2 in case of time-reversal invariance. That's a clean argument for how you could explain, just from dynamics, that the ramp has slope 2 for systems with time reversal, compared to generic systems — it's just this factor of 2. Then there is the sum of the squared amplitudes — the inverse Jacobians, modulus squared — and here comes another interesting observation, which can actually be proven as a rigorous mathematical theorem: this sum is just equal to t. It is a sum rule, known as the Hannay–Ozorio de Almeida sum rule, a well-known result in classical Hamiltonian chaos theory. And once you have this, then
this is just t — or 2t for systems which have time reversal. So that was Berry's argument, and people have then struggled to do more. I will now skip the rest of this semiclassical game: there was a tour de force effort of many groups which culminated around 2004 with a derivation — not a proof, but a derivation — of the full random matrix form factor from periodic orbit theory. This was Fritz Haake and company — he was the leader of a group in Essen — with many collaborators; they were able to figure out the whole combinatorics. It turns out you have to pair not only these orbits but also orbits which are related more remotely, to obtain the higher-order terms; the diagonal approximation was just the leading-order term, good for short times, but to get all the terms of random matrix theory you have to do much more. It turns out it's possible; this is now understood. Now let's go to many-body physics — this was just a warm-up — and see what we can say for a many-body system. [Audience question about the sum rule.] For the sum rule, I think you have to assume that your model is hyperbolic, or uniformly hyperbolic — all periodic trajectories have to be unstable, so it has to be chaotic. I don't think I can give you an intuitive explanation for the harmonic oscillator. OK, so let's discuss the harmonic oscillator: first of all, I assume we have Floquet dynamics, otherwise things would be slightly different. You know the harmonic oscillator has very strange periodic trajectories: all periodic trajectories have the same period — it's a monochromatic system. The harmonic oscillator is always bad when you do chaos; it is always strange. [Inaudible question.] Sorry, I didn't get you — can you speak up?
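The argument just sketched can be condensed into a few schematic lines (my notation: A_p is the stability amplitude and S_p the classical action of periodic orbit p, and g = 1 without, g = 2 with, time-reversal symmetry):

```latex
\operatorname{tr} U^t \;\simeq\; \sum_{p\,:\,\text{period } t} A_p\, e^{\,i S_p/\hbar},
\qquad
K(t) \;=\; \Big\langle \big|\operatorname{tr} U^t\big|^2 \Big\rangle
\;=\; \Big\langle \sum_{p,p'} A_p A_{p'}^{*}\, e^{\,i(S_p - S_{p'})/\hbar} \Big\rangle
\;\approx\; g \sum_{p} |A_p|^2 \;=\; g\,t ,
```

where the middle step is the diagonal approximation (off-diagonal phases average to zero) and the last equality is the Hannay–Ozorio de Almeida sum rule, which is what produces the linear ramp.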
[Audience:] Even as harmonic, we can consider some one-dimensional motion — doesn't that resolve this problem of monochromaticity? Then we can just semiclassically quantize all trajectories. [Lecturer:] More or less, yes, but as far as I remember it is crucial to assume that the trajectories are unstable — so real chaos — and then one dimension would not work. OK, but it's really a side track for us; I can give you the reference if you want to check — maybe I can just write it down. These were two postdocs/students of Michael Berry. OK, now let's go back to spin chains. But before going to chaotic spin chains — I don't know how I am doing with time; I probably have to speed up a bit — I still want to give you another exercise. There are a couple of things one can compute very easily which are useful for building intuition: the spectral form factor in spin chains. I will start with an example: non-interacting, independent quantum spins. I will assume, also for most of the rest of my lectures, that d is equal to 2, so you can call this a spin-1/2 chain, or a qubit chain if you want. So first consider non-interacting, independent quantum spins 1/2. If you want, in the context of localization you can also call these l-bits: suppose the system is non-ergodic — suppose it were many-body localized — then you can reduce it to independent spins, and the question is whether you can compute the spectral form factor for such a model. So I just tried to argue that this could be more general than literally having independent spins. I will assume that my time evolution can be written as a product of L single-qubit gates, where each single-qubit gate is, let's say for simplicity, a rotation generated by a field h: it just
rotates the qubit around the z axis, with coupling to a magnetic field of strength h. Now let's assume that these are random fields: no interaction, but a rotation which is different on every site. This corresponds to Hamiltonian dynamics where, because the terms are independent and commuting, we can just add them up: U equals e to the minus i H, with H the sum over j from 1 to L of h_j sigma_j^z. Now, can we compute the spectral form factor for that? Here we have a very nice handle on averaging — I explained already that we don't expect the form factor to be self-averaging — namely we can average over the random fields. So we define the spectral form factor as an average over this ensemble of independent spins, taking the expectation value with respect to the vector of fields h. Now there is a calculation I wanted to do on the board, but I invite you to do it yourself because I want to move a bit faster — it is just an exercise in manipulating tensor products and traces of tensor products. One thing you can do immediately is to write this as the expectation value of the trace of U to the t tensored with U to the minus t; then the expectation value factorizes over the independent spins, and you take a trace for each spin separately: the single-site factor is the modulus squared of the trace of e to the minus i h_j sigma^z t, which equals 2 plus e to the 2 i h_j t plus e to the minus 2 i h_j t. The rest of the details I invite you to fill in yourself; everything is then raised to the power L, and then you take the expectation value over the random fields. As soon as these fields are uniformly distributed on 0 to pi,
the oscillating terms cancel and the result is 2 to the L — flat. So it's the same as for Poisson: independent random spins give exactly what you expect for regular, integrable systems. Now let's go to something much more interesting: local interactions. For the sake of concreteness, and in view of the lack of time — I wanted to show you also the general statement for general dual-unitary circuits, but this I have to skip — I will close my lectures by outlining the calculation for a specific class of dual-unitary models which is more intuitive than the most general dual-unitary circuit. So I will now discuss the SFF of the locally interacting kicked Ising model. This is also how we came up with this: it was the first result we had for the spectral form factor, and I think for pedagogical purposes it is probably the easiest to explain — or at least it has the most physical interpretation, in terms of something discussed a lot in the literature: it is related to the transverse-field Ising model, which is a popular model in its own right. What we will do is take an Ising model with transverse and longitudinal fields, but kicked: we kick it with the transverse field. So take a time-dependent Hamiltonian with a vector of longitudinal fields — I will use the same notation — consisting of an Ising part, which depends on the longitudinal field vector, plus a kick; the kick means that this piece of the Hamiltonian is switched on and off in pulses, a train of delta functions. I assume the period of the drive is equal to 1. The Ising part is just the classical Ising model with external field, if you want: there is an Ising coupling J, and then longitudinal fields h_j,
which are potentially different on every site, with periodic boundary conditions; and then I have a kick which depends on a single parameter called b, the transverse field strength. Of course I will immediately phrase it as a Floquet system and write a Floquet circuit for it; I just wanted to start with the Hamiltonian so one could also think of how to realize it, maybe with some pulsed setup. So the Floquet operator corresponding to the kicked Ising model — let me label it KI — explicitly depends on the external fields. This also changes the setup relative to the previous lectures: previously I on purpose considered only non-disordered systems — we had a fixed gate, translationally invariant in space and in time. Now I will have to mess up this order a little bit, because of the non-self-averaging of the spectral form factor, but I will switch off the disorder at the end of the calculation. It is important to have it, and it is important to average over it, and the crucial step of this calculation is to be able to compute averages over disorder. There will be a trick which crucially depends on the fact that I have a dual-unitary circuit: dual unitarity buys me a lot, because I can do the disorder averaging explicitly, and sometimes analytically. So this is, in a nutshell, the rest of my lecture: I will try to explain, in a simple example, how to formulate the disorder averaging as a clean mathematical problem. Then you still have to solve this problem, and it's not so simple, so I will skip most of the steps, but at least I will connect it to a very simple, intuitive problem for which you can easily see what the solution should be. As I say, the previous steps were kind of super simple and nothing was mysterious; this last part will still be hiding a lot of pages of proofs, so I warn
you that I will not be able to do all the details here. OK, so what is the Floquet operator? You can write it in terms of two pieces: the kick and the Ising part, where the Ising part depends on the fields. You take the time-ordered product — for a time-dependent Hamiltonian this is standard textbook quantum dynamics — but here there are just two non-commuting terms, so the propagator over one period is simply the product of these two pieces. Then what we want is to compute the spectral form factor: the expectation value, with respect to the fields, of the modulus squared of the trace of U_KI to the power t. Now, there is the standard analogy between quantum dynamics in 1D and statistical physics in 2D, which here we can make completely explicit: we can write this trace of the propagator as a partition sum — I mentioned this a couple of times, but here we can do it explicitly — in terms of, basically, a 2D Ising model. This is a 1D kicked Ising model, and if you write out the partition sum you see that it really looks like a 2D Ising model. So basically we take a path integral again — a discrete-time path integral: you insert a complete set of states t times between the propagators, and what you get is a sum over spin configurations, which I will label by tau, with tau going from 1 to t: each underlined s_tau is a configuration (s_{tau,1}, s_{tau,2}, ..., s_{tau,L}) of the spins of the chain in 1D, and it carries the index tau running in the other, time direction, which takes t values. I will have to speed up a bit, so I will not write all the details of this derivation, but again it is something that
is straightforward, so for the missing details I invite you to try to do it yourself as an exercise. Once you do this, you see that you get a sum over all spin configurations s_{tau,j}; it is like a partition sum over configurations of a 2D classical spin lattice. Then we have a product over tau and over j from 1 to l. We get a plain product over j from the Ising part because it is diagonal in the computational basis, so you can write it immediately as a product of pairwise terms. Then there is the kick term; the kick term is again a product, because it is built from independent single-site terms, but it is not diagonal, so let me write its matrix elements. Let's call them V(s_{tau,j}, s_{tau+1,j}); this couples s_{tau+1} and s_tau at the same j, because the kick is a tensor product over sites. These are just the matrix elements of each single-site term, <s| e^{-i b sigma^x} |s'>; that's just the definition. And then there is the other part, which, as I said, is also a product: J s_{tau,j} s_{tau,j+1} + h_j s_{tau,j}. Putting everything together, we can really write the whole thing as the partition sum of an Ising model. Skipping a few details, which are straightforward, I can write it as a simple partition sum with complex weights: a prefactor raised to the power lt/2, times a sum over all classical configurations s_{tau,j} of e^{-iE}, where E is an energy functional of the spin configuration and the vector of magnetic fields. This energy functional is a sum over the two directions, tau from 1 to t, with periodic boundary conditions both ways, of J s_{tau,j} s_{tau,j+1} + J~ s_{tau,j} s_{tau+1,j} plus the field term. You see, this is an anisotropic 2D Ising model: it has two different interaction constants, one in the horizontal direction and one in the vertical direction (time is vertical), and then there is a field which depends only on position, not on time, so only on one direction. So it's like a classical 2D Ising model in
striped fields: the fields are random, but they are modulated only in one of the directions; they are flat in one direction and random in the other. It is all classical; the only caveat is that, although it is classical, the weights are not positive, otherwise it would just be classical stuff. And what is this J~? You can figure it out from this representation: J~ = -pi/4 - (i/2) log tan b. So the parameter b is now hidden in the exponential. OK, now we stare at this expression and think: we are doomed, because this is not integrable. Geniuses like Baxter (and Onsager before him) taught us how to solve Ising models, and they claim that this type of Ising model is surely not integrable, so we might as well forget about it. What can we do? If there were no external field, this would be integrable, and we could use the Onsager solution or one of the beautiful Baxter solutions; with the field it is not integrable, so we have to do something else. But it is not hopeless. What should we do? I will use some diagrams, which are like a version of tensor networks again, to explain what we can do. The one thing we have to recognize, staring at this formula, is that we can easily switch between space and time: the structure of the formula is such that it is like an Ising model in both directions, so if we exchange space and time we just have to replace J by J~ and J~ by J, and we get the same structure. So let's just draw this partition sum. What we have is really a nice Cartesian grid, with x in one direction and t in the other, magnetic fields h_1, h_2 up to h_l, and periodic boundary conditions. Let's call this the partition sum of a two-dimensional classical-like Ising model. But we started from a transfer matrix: in the beginning this was just a 2^l by 2^l matrix raised to the power t, followed by a trace. So this is, in
the terminology of stat mech, what is called a transfer matrix: we evaluated this partition sum in terms of a transfer matrix. I should probably use some colors, so let me complete the picture. Say this row was the transfer matrix, which I called U_KI, and I iterated it t times and then took the trace. Now, instead, I can take this column matrix, which I will call U~_KI; it depends explicitly on h_j, and I attach the field to this particular wire, but otherwise it has exactly the same algebraic structure, because the problem is symmetric under the exchange of space and time; you just have to swap the parameters, J for J~ and vice versa. So basically we have to take the parameter J~, which is a function of J and b, and b~, also a function of J and b: the row operator is a function of (J, b) and the column operator is a function of (J~, b~). There is a simple, explicit transformation, which I have not written down here but could spell out for you, that builds the column transfer matrix out of the row transfer matrix. The other difference is that this matrix acts on a spin ring of t sites, so it is a 2^t by 2^t matrix: the row transfer matrix was 2^l by 2^l, and this one is 2^t by 2^t. Sometimes it is better to contract one way, sometimes the other: if t is short and l is long, it is much better to contract along columns, because it is easier to take powers of a smaller matrix; you can diagonalize it, figure out its spectral decomposition, and then things become easy. Where is U~?
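To make the space-time flip concrete before continuing, here is a minimal numerical sketch. This is my own reconstruction, not code from the lecture: the conventions (U = K V with V the diagonal Ising part, K the kick), the function names, and the parameter values are all assumptions. It builds the kicked Ising Floquet operator on a ring of l qubits and the column (dual) transfer matrices on a ring of t sites, with J and the dual coupling J~ = -pi/4 - (i/2) log tan b exchanging roles, and checks that the row-wise contraction tr U^t equals the column-wise one up to the per-site kick amplitude A = sqrt(-(i/2) sin 2b).

```python
import numpy as np
from functools import reduce

def spin_configs(n):
    # All 2^n configurations of +/-1 spins, ordered to match the kron basis
    # (first tensor factor most significant, s = +1 first).
    grids = np.meshgrid(*([np.array([1, -1])] * n), indexing="ij")
    return np.stack([g.ravel() for g in grids], axis=-1)        # shape (2^n, n)

def floquet_kicked_ising(l, J, b, h):
    # U = K V with V = exp(-i sum_j [J Z_j Z_{j+1} + h_j Z_j]) on a periodic ring
    # and K = prod_j exp(-i b X_j).
    s = spin_configs(l)
    V = np.diag(np.exp(-1j * (J * np.sum(s * np.roll(s, -1, axis=1), axis=1) + s @ h)))
    kx = np.array([[np.cos(b), -1j * np.sin(b)],
                   [-1j * np.sin(b), np.cos(b)]])               # exp(-i b sigma^x)
    return reduce(np.kron, [kx] * l) @ V

def dual_transfer(t, J, Jt, h):
    # Column-to-column ("spatial") transfer matrix on a ring of t sites:
    # J and Jt swap roles; generally non-unitary since Jt is complex.
    s = spin_configs(t)
    diag = np.exp(-1j * (Jt * np.sum(s * np.roll(s, -1, axis=1), axis=1)
                         + h * s.sum(axis=1)))
    kick = np.exp(-1j * J * (s @ s.T))   # <s'|column kick|s> = prod_tau e^{-i J s'_tau s_tau}
    return kick * diag[None, :]

l, t, J, b = 4, 3, 0.37, 0.61            # small, arbitrary example values
rng = np.random.default_rng(7)
h = rng.normal(0.3, 0.5, size=l)         # i.i.d. longitudinal fields

U = floquet_kicked_ising(l, J, b, h)
lhs = np.trace(np.linalg.matrix_power(U, t))                    # row-wise contraction

Jt = -np.pi / 4 - 0.5j * np.log(np.tan(b))                      # dual coupling J~
A = np.sqrt(-0.5j * np.sin(2 * b))                              # per-site kick amplitude
cols = [dual_transfer(t, J, Jt, hj) for hj in h]
rhs = A ** (l * t) * np.trace(reduce(np.matmul, cols[::-1]))    # column-wise contraction

print(lhs, rhs)
```

The two printed numbers should agree: the same 2D Ising partition sum is contracted once along rows (a 2^l-dimensional matrix powered t times) and once along columns (a product of l matrices of dimension 2^t), which is exactly the duality being described.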
U~ is this column, yes, and U is the row. As for the dependence on J and b versus J~ and b~: b~ appears because you now have to express J in terms of b~. These two equations should give you the transformation; it is given implicitly. This is called space-time duality, or the space-time flip: the equivalence between the propagators in the two directions. It was first observed in the article by Boris Gutkin, Thomas Guhr and company. But why is this useful? It sounds like a super cool property, and in fact this is what preceded dual unitarity; dual unitarity basically generalizes this feature. Not completely yet, because so far I do not have unitarity. So now I will insist that this column operator is unitary: it has the same algebraic form, but I demand that it be unitary. When is it unitary? Precisely when the parameters J~ and b~ are real: U~ is unitary if and only if J~ and b~ are real; otherwise it is unfortunately not unitary. Next I have to sketch the key trick, which I think will be the main part of this explanation: how to do the averaging of this object. You see, this object is a two-replica object, it has two traces. So I have to erase a little bit, unfortunately, because I want to make another picture slightly below. I take a completely identical copy of this picture, which represents tr U^t, and the other picture represents tr U^{-t}, which is just the complex conjugate. Again this is x, running from 1 to l, and this is again time. And now, in the product of these two pictures, again, these are
two independent tensor network diagrams. Since the thing decouples into a product, there is seemingly no coupling; but there will be a coupling, because we have to do the disorder averaging, and the disorder averaging produces the coupling. But first, how would we do the disorder averaging? If we contracted along the rows, as we wanted at first, it would be impossible, because the disorder is hidden in the spatial modulation of the field. So instead we contract along columns: for each column the field is different, and in the subsequent column it is completely independent. So we can average over the field independently, column by column, because the field is i.i.d. If you contract this diagram column-wise, you can average locally; it then becomes like a quantum channel, a noisy quantum map, where the field can be averaged over in each instance of the map, because space becomes the new time. Then you can average over the disorder because the disorder is i.i.d. in space. That is the trick in a nutshell: we take all these things together and baptize them as a transfer matrix. So that's how it goes: a two-replica calculation, where the disorder averaging is done after a space-time flip, using the fact that the disorder is i.i.d. in space; then we are in business. The rest is technicality. I will wrap up in 15 minutes for sure, but let me now do this calculation; it is a three-line calculation, which I will do just to make sure we all follow the argument. I will put a star here: this is the complex conjugate of the many-body matrix, and this is still the vector of fields. Now I apply the duality: the duality formula says that I can write tr(U_KI^t) as the trace of a product over j from 1 to l of U~(h_j). I
mean, instead of doing the sweep where the propagator is the same at all times but carries a modulated field, I do the sweep where I have a product of matrices with different local fields, each constant in the new space, which is the old time. So these are constant-field Ising models, but with different field values. OK, great. Then I use again the fundamental feature that the trace of a tensor product factorizes: I can write the whole thing as the trace of this factor tensored with its complex conjugate. And now I can pull the averaging inside, because I can average independently over each factor. So this becomes the trace of a product over j from 1 to l of the expectation value over h_j. Since the local field is homogeneous in the new space and the averaging is independent, I can forget about the index j, drop the product, and simply raise the averaged factor to the power l. And now I baptize this object the transfer matrix. So I have reduced my problem of computing the spectral form factor to a trace, again a kind of partition sum of a two-replica model: the trace of some transfer matrix to some power. This transfer matrix acts on the tensor product of two spin chains: it is an endomorphism of (C^2)^{tensor t} tensor (C^2)^{tensor t}, so it acts on two spin rings of t sites each, one and the other. Why do I need two spin rings? These are the two replicas, and they are coupled because of the averaging; but this coupling is very nice. I will now try to do two steps to show explicitly how this transfer matrix looks. I feel tempted to go to the computer, but then I would probably lose you, so better not to go into all the detail. So let's see what we
want to do now: let's assume that the field is Gaussian, h_j Gaussian with mean h-bar and variance sigma^2. So I will assume that the expectation value is evaluated as a Gaussian integral: any function of h is computed as an integral from minus infinity to infinity against the Gaussian kernel exp(-(h - h-bar)^2 / 2 sigma^2) / sqrt(2 pi sigma^2). I also define U~_KI(h), which depends on the local field; that's the dual kicked Ising operator, the spatial one. Now, I probably went too fast there, but I have assumed that my U~ is unitary, and unitarity of U~ means that I require J~ and b~ to be real. But requiring J~ and b~ real implies that all these parameters have to be equal in modulus, and equal to pi/4: they are all either plus or minus pi/4. There are four different cases, which I will not discuss separately, but if you stare at the equations you see that the only solution which makes J~ and b~ real is the one where you start from J and b equal to plus or minus pi/4, and then J~ and b~ are also pi/4 in modulus. That is the so-called self-dual point of the kicked Ising model, and it corresponds to a dual-unitary circuit; so this is a special instance of a dual-unitary circuit. I decided to be specific here, but I could formulate these things more generally if I wanted. OK, now one useful identity is to isolate the z field. Let me call it M_z: with M I designate the magnetization, the total z component, but now in time, so the summation index is tau, running from 1 to t. I can always write it like this. Remember how the kicked Ising operator was formulated: H_Ising was diagonal, and it has only commuting
terms, so you can separate the one-body term from the two-body terms, and the one-body term is just this diagonal. Then let's simplify the calculation of the transfer matrix; it is now a Gaussian integral, and what is nice is that the rest is just the algebra of manipulating tensor products. I can write this as (this tensor this) times (this tensor this), and the first factor is independent of h, so I can pull it out of the average. It is a unitary matrix, just a transverse-field Ising evolution with no longitudinal field, which by the way is also free-fermionizable, precisely because it is transverse-field Ising. The rest is a Gaussian average of single-body terms, and computing it is an exercise in Gaussian integration, so I will just spit out the result: it is again a Gaussian. Now I have to speed up a bit, so I will flash just the main things. What I get is some superoperator; you have seen superoperators during this week already, so you can think of this in vectorized form: it is the superoperator that takes the commutator with M_z, squared, so a Gaussian of a quadratic form in the commutator with M_z. These two factors commute, so I can put this piece back inside, and at the end of the day I have a general expression for the transfer matrix: the U_KI part tensored with its conjugate, times something I call a contractive map, which depends only on the variance and is a Gaussian of the commutator. And to remind you of what we are doing: we have already expressed the spectral form factor as the trace of this transfer matrix to the power l. Now, for this specific dual-unitary case of the kicked Ising model, this piece is unitary and this piece is contractive, or better to say non-expansive
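The structure just described, a unitary transverse-field-Ising factor times a Gaussian contraction built from the commutator with M_z, can be sketched numerically. This is my own reconstruction, not the lecturer's code: the self-dual parameters J = b = pi/4 (so J~ = -pi/4), the dual-operator convention, and the field statistics (h-bar, sigma) are assumptions. Since the field enters as U~(h) = U~(0) e^{-i h M_z}, the Gaussian average gives the transfer matrix T = (U~(0) tensor U~(0)*) exp(-i h-bar dM - sigma^2 dM^2 / 2), where dM = M_z tensor 1 - 1 tensor M_z is the vectorized commutator.

```python
import numpy as np

t = 3                           # number of time steps = dual chain length
J = b = np.pi / 4               # self-dual point of the kicked Ising model
Jt = -np.pi / 4                 # dual coupling, real here
hbar, sigma = 0.6, 0.8          # mean and std of the field (my choice)

# +/-1 spin configurations on the ring of t sites, ordered like the kron basis.
grids = np.meshgrid(*([np.array([1, -1])] * t), indexing="ij")
conf = np.stack([g.ravel() for g in grids], axis=-1)            # (2^t, t)

# Dual propagator at h = 0: normalized column kick times diagonal Ising part.
A = np.sqrt(-0.5j * np.sin(2 * b))                              # per-site normalization
kick = A ** t * np.exp(-1j * J * (conf @ conf.T))
diag0 = np.exp(-1j * Jt * np.sum(conf * np.roll(conf, -1, axis=1), axis=1))
U0 = kick * diag0[None, :]                                      # unitary at self-duality

# T = (U0 (x) U0*) exp(-i hbar dM - sigma^2 dM^2 / 2),
# dM = M (x) 1 - 1 (x) M with M = sum_tau sigma^z_tau (diagonal, so elementwise).
M = conf.sum(axis=1)
dM = (M[:, None] - M[None, :]).ravel()                          # diagonal of dM, C order
T = np.kron(U0, U0.conj()) * np.exp(-1j * hbar * dM
                                    - 0.5 * sigma ** 2 * dM ** 2)[None, :]

ev = np.linalg.eigvals(T)
print(np.sort(np.abs(ev))[-8:])          # leading moduli: a degenerate cluster at 1
for l in (10, 40, 160):                  # SFF = tr T^l for increasing system size
    print(l, np.trace(np.linalg.matrix_power(T, l)).real)
```

The vectorized translation operators on the t-ring commute with both U~(0) and M_z, so they give (at least) t eigenvectors of T with eigenvalue 1; per the 2t result quoted below, tr T^l is expected to approach 2t as l grows, though I have not asserted that here.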
so its spectrum is easy to characterize: it is just the exponential of a commutator squared, the commutator is a Hermitian superoperator, so the spectrum is real, lies between 0 and 1, and contains the eigenvalue 1. The eigenvalue 1 corresponds to all vectorized operators which commute with M_z: if you apply the map to a vectorized operator commuting with M_z, the commutator gives 0, and e^0 = 1. So it has a non-trivial kernel, or if you want, a non-trivial eigenvalue-1 eigenspace, and that is of course very important, because we want to take the thermodynamic limit. What we need are basically two things, and what we can show, with a lot of hard work (the main progress here is really some brutal mathematics), is, first, that the eigenvalue 1 of the full transfer matrix is degenerate with multiplicity 2t; in our case it is 2t, while for dual-unitary circuits without time-reversal symmetry it would be t. The second thing to show is that there is a positive gap to the rest of the spectrum, so that there is no other eigenvalue on the unit circle besides 1. So we have a situation which is not mixing, because the eigenvalue 1 is degenerate, a multiplet; that is why the result is not 1 but proportional to t. What we are doing, basically, is looking for the eigenvectors, and the fact that they are t-fold degenerate (2t-fold here) is related to the time-translation invariance symmetry: basically now time is
the new space, so we have a problem which is translation invariant in a space which used to be the time, with t sites. Because of this translation invariance we can decompose the problem into symmetry sectors, and in each symmetry sector we can show that there is a unique eigenvector; that corresponds to the spectral form factor being proportional to t. I think I will stop here; I will not go into the formulas. I wanted to flash through a couple of lemmas, but I think it doesn't make sense. The key is in these words: it is all about translation invariance in time. Of course, the hard part of the proof is to show that in each symmetry sector the eigenvector is unique, that there is no other one; this part is basically algebra, showing that the representation of a certain algebra of operators is irreducible. OK, I think I have said more or less everything I wanted to say. I am happy to take questions, and I am still here for a couple of hours, so let's discuss more. [Question] Thank you so much for the beautiful lectures. Can you classify the dual unitaries? In the canonical representation from the previous lecture, when you classify dual-unitary gates, you write u in a form where j_x and j_y appear, and you take them to be pi/4; here the self-dual limit is also pi/4. Is there any relation? [Answer] Yes, it is related, of course. This pi/4 is really halfway between two trivial cases. These models are very similar to the ones people discussed in the business of time crystals, where they have this pi phase, the pi time-crystal phase, where one of the parameters is pi/2. Maybe, since you ask, it would be instructive to draw a phase diagram of this kicked Ising model. There are two parameters, J and b,
which are crucial, and you can basically limit the parameter space to the square between 0 and pi/2. When the two parameters are equal, that is a critical line. Why critical? Assume first that there is no longitudinal field; then this becomes a free-fermion model, you can write a dispersion relation for the quasiparticles, and on this line it is gapless, so the model is critical there. It turns out there is another line where you can argue the same, and the intersection of the two lines is the point (pi/4, pi/4), which is the coolest point of this parameter space: it is both critical and chaotic, or, as I also like to call it, critical chaos. And since you asked, one thing I forgot to say which is absolutely important: after this calculation we have shown that the spectral form factor is 2t. We took the thermodynamic limit, so we pushed the Heisenberg time to infinity, and we can only access the linear ramp, nothing else. But it is extremely interesting at very short times: for generic models we would find so-called Thouless-time effects, non-universal effects at short times, with random matrix statistics appearing only after a time called the Thouless time, related to some transport phenomenon or whatever, something non-universal. In these dual-unitary models there is, in general, no such thing: the spectral form factor follows the random matrix result from time one onwards, which means there is no such timescale, which means this really is critical chaos. Thanks for asking. Thanks for being around the whole week, and thanks to Tomas again for the nice lecture.
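As a footnote to the phase-diagram answer above, the two critical lines can be read off from the free-fermion dispersion of the kicked Ising chain at zero longitudinal field. I am quoting the standard quasienergy relation cos eps(k) = cos 2J cos 2b - sin 2J sin 2b cos k from memory (conventions vary between papers), so treat this as an illustrative sketch rather than the lecturer's formula.

```python
import numpy as np

def quasienergy(k, J, b):
    # Assumed free-fermion quasienergy of the kicked transverse-field Ising chain
    # at h = 0:  cos eps(k) = cos 2J cos 2b - sin 2J sin 2b cos k.
    c = np.cos(2 * J) * np.cos(2 * b) - np.sin(2 * J) * np.sin(2 * b) * np.cos(k)
    return np.arccos(np.clip(c, -1.0, 1.0))

k = np.linspace(0.0, np.pi, 2001)

def gaps(J, b):
    # Distance of the quasienergy band from eps = 0 and from eps = pi; for a
    # Floquet system quasienergy is periodic, so both closings are physical.
    eps = quasienergy(k, J, b)
    return eps.min(), np.pi - eps.max()

print(gaps(0.3, 0.3))               # line J = b: band touches eps = 0
print(gaps(0.5, np.pi / 2 - 0.5))   # line J + b = pi/2: band touches eps = pi
print(gaps(0.3, 0.6))               # generic point: gapped
print(gaps(np.pi / 4, np.pi / 4))   # self-dual point: both closings at once
```

With this form of the dispersion, the square 0 <= J, b <= pi/2 reproduces the picture from the answer: the line J = b is gapless at quasienergy 0, a second line J + b = pi/2 is gapless at quasienergy pi, and their intersection (pi/4, pi/4) is the self-dual, critical-and-chaotic point.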