Okay, so thank you very much for the invitation. I'm really happy to be back at ICTP. So I want to tell you about this machine learning perspective, and what I mean by that. I was at D-Wave for a while, and at Perimeter, but a couple of weeks ago I moved to this place called the Vector Institute. Let me tell you what this place is: it's an independent not-for-profit collaboration between the universities, the government, and businesses to develop AI. These are the Vector faculty, most notably Geoffrey Hinton and Rich Zemel, my boss, basically. And I take care of the quantum physics aspect of AI. So part of what I'm going to tell you today is about that: my perspective, my take on what the implications of quantum physics are for AI, and what AI can do for us in physics. And these are my machine-learning-physics friends; let me read their names: Roger Melko, Giacomo Torlai, Peter Broecker, Simon Trebst, Ehsan Khatami, Kelvin Ch'ng, Giuseppe Carleo, Matthias Troyer, and Guglielmo Mazzola, who is here in the audience.

So let me give you a little introduction using a very formal description of the complexity of the many-body problem in classical and quantum physics. A consequence of the first postulate of quantum mechanics is that a generic specification of a quantum state requires resources that grow exponentially in the number of degrees of freedom. For instance, if you want to write down the wave function of a spin-1/2 system, in general you need 2^N coefficients. That's a lot. This audience doesn't need a reminder of what exponential growth means, but let me just tell you that today's best supercomputers can solve the Schrödinger equation exactly for only a few particles, around 45 spins, and for that you need big, big computers. Just to emphasize how tough the problems we deal with are: storing the wave function of a 273-spin-1/2 system would require a computer with more bits than there are atoms in the universe. And the problems we're interested in, relevant in chemistry, condensed matter, and quantum computing, are way larger than 273 spins, as we all know. Then there is quantum computing, which may help us solve some of these problems, but that's still under development; we still have to figure it out. So for now we have to rely on classical algorithms.

And this works because nature is sometimes compassionate, even though nature may not care what we think: many-body systems can typically be studied, or characterized, with an amount of information much smaller than this maximum, exponentially large capacity. We exploit that idea in quantum Monte Carlo approaches, where you do a systematic sampling of the most important regions of the Hilbert space. That's why these techniques are successful: the physics lives in a particular region of the Hilbert space where things are more relevant, and so on.
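As an aside, the storage claim above is easy to check with back-of-the-envelope arithmetic, taking the standard estimate of roughly $10^{80}$ atoms in the observable universe:

$$
2^{273} \;=\; 10^{\,273\,\log_{10} 2} \;\approx\; 10^{82.2} \;>\; 10^{80},
$$

so even one bit per basis-state amplitude of 273 spin-1/2 particles already outstrips the number of atoms in the universe.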
And also, because of the entanglement properties of these many-body systems, we can take a very generic wave function, such as the one I'm drawing here with diagrams, and use, for instance, matrix product states and the density matrix renormalization group to get an understanding of many-body systems. So that's part of the toolbox we typically use. But the point I wanted to make, which was already made in the previous talk, is that the machine learning community deals with equally high-dimensional problems. The problems they deal with are exponentially big, and they still battle this curse of dimensionality pretty successfully, with impressive results in a wide spectrum of scientifically and technologically relevant research areas. And indeed, quantum and classical many-body physics has not been the exception: there's a lot of activity going on right now, classifying phases of matter, constructing wave functions with techniques inspired by machine learning, accelerating Monte Carlo and molecular dynamics, quantum state preparation; it's almost impossible to list it all.

So let me tell you a little bit about what I've done in this area. I'll discuss some applications of machine learning in many-body physics. One of them is a supervised learning approach to classical phase transitions in Ising models; that's the simplest thing I started with. Then I'll briefly mention two applications of these ideas to quantum systems. One is the interpretation of the wave function as a neural network, where we write down the ground state of the toric code using convolutional neural networks. And finally, very briefly, a data-intensive problem in quantum mechanics, quantum state tomography, again using neural networks. So that's what I want to tell you today. Let me go very fast because time is short.

So, supervised learning of phases of matter. What do I mean by that? I'm going to introduce the idea using the simplest problem, the Ising model. We all know it, but let me remind you: it's a classical system of variables sigma that take values plus or minus one, with an energy function. At low temperature, in order to minimize this energy, the spins basically polarize up or down. But as the temperature increases, the system transitions from the low-temperature phase to a high-temperature phase where the spins are disordered and look random: a paramagnet. So there's this ferromagnetic transition. It has an order parameter, basically the average over the spins, which is finite at low temperature and vanishes toward the paramagnet at a critical temperature around 2.27, as solved analytically by Onsager in the 1940s. So that's my system.
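For reference, in standard notation (my choice of symbols, standing in for the slide), the energy function, the order parameter, and Onsager's exact critical temperature for the square lattice are

$$
E(\sigma) = -J \sum_{\langle ij \rangle} \sigma_i \sigma_j, \qquad
m = \frac{1}{N} \sum_{i} \sigma_i, \qquad
\frac{k_B T_c}{J} = \frac{2}{\ln\!\left(1 + \sqrt{2}\right)} \approx 2.269,
$$

where the first sum runs over nearest-neighbor pairs.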
So what do I mean by machine learning and supervised learning in this context? The inspiration came from trying to understand fluctuations in handwritten digits. In machine learning there is a classic problem, that of recognizing handwritten digits, and there is a gigantic dataset of them. The machine learning community developed these powerful techniques based on neural networks, where you take an image, a high-dimensional vector that any camera in our phones could produce, and the neural network recognizes it as a five, or a zero, or a one, two, three. When I was learning about this, I thought: okay, this is a five, and it can be written as a kind of perfect five, a mean-field five, plus some fluctuations induced by the way we write. And that looked to me like a mean-field calculation, where you have a mean-field cartoon of the phase and then you add thermal or quantum fluctuations on top. So I thought: I could do the same, in principle, if I can take snapshots, images, of these phases. And that's exactly what I did: I took the technology developed for recognizing digits and applied it to recognizing configurations at low temperature and high temperature. Instead of recognizing digits, I recognize "oh, that looks more like a ferromagnet" and "this one looks more like a paramagnet." It's pretty simple, but that's how I started understanding machine learning.

So what I did was take the 2D Ising model and simply generate a big bunch of configurations drawn from the Boltzmann distribution: 20k samples at low temperature and 20k at high temperature. Here is my dataset plotted in a two-dimensional way: at high temperature you have this big blob of configurations, and at low temperature you have two blobs that correspond to spins mostly up or mostly down, and somewhere in between there is a phase transition. So that's my machine learning setup.

And these are the results from the neural network I trained on this system. You have two output neurons; one I call the low-temperature neuron, the other the high-temperature one. At low temperature the blue one is very active, near one, but as I cross from low to high temperature, the cold neuron deactivates while the hot one activates, and that crossover happens near the critical temperature, as you would expect, because that's the point where you start losing the magnetization and the configurations become more and more random. We also did a finite-size scaling analysis of this neural response, and we were even able to do a finite-size collapse and extract critical exponents, meaning that this learning procedure somehow preserves the universal properties of the phase transition.

And then, perhaps the surprising part for us: we did all the training of the neural network on the square lattice, everything more or less simple, on the square lattice. But then we asked, okay, let's test whether this thing is learning something meaningful. So we ran simulations on the triangular lattice and fed those configurations to the neural net trained on the square lattice. Notice that roughly 2.27 here is the square-lattice critical point, and when we ran the triangular configurations through the trained neural net, we were able to pinpoint the critical point of the triangular lattice. A minimal toy reconstruction of this whole pipeline is sketched below.
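Here is a minimal sketch of that pipeline in Python, my own toy reconstruction rather than the authors' code: sample small 2D Ising configurations with single-spin Metropolis updates at one temperature per phase, then train a tiny one-hidden-layer softmax classifier from scratch with numpy (a stand-in for the fully connected network used in the talk; all sizes and hyperparameters are arbitrary choices of mine).

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sample(L, T, n_sweeps=100):
    """One L x L Ising configuration, approximately Boltzmann-distributed
    at temperature T (units J = k_B = 1), periodic boundaries."""
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_sweeps * L * L):
        i, j = rng.integers(L, size=2)
        nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2 * s[i, j] * nn                       # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    return s

# Dataset: label 0 = deep in the ferromagnet, label 1 = deep in the paramagnet.
L, n = 8, 100
X = np.array([metropolis_sample(L, T).ravel() for T in [1.5] * n + [3.5] * n], float)
y = np.array([0] * n + [1] * n)

# Tiny fully connected net: one ReLU hidden layer, two softmax output neurons
# (the "cold" and "hot" neurons), trained by plain gradient descent.
W1, b1 = 0.1 * rng.standard_normal((L * L, 16)), np.zeros(16)
W2, b2 = 0.1 * rng.standard_normal((16, 2)), np.zeros(2)
for _ in range(3000):
    H = np.maximum(X @ W1 + b1, 0.0)
    logits = H @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    g = p.copy()
    g[np.arange(len(y)), y] -= 1.0                  # d(cross-entropy)/d(logits)
    g /= len(y)
    gH = (g @ W2.T) * (H > 0)                       # backprop through the ReLU
    W2 -= 0.1 * H.T @ g;  b2 -= 0.1 * g.sum(0)
    W1 -= 0.1 * X.T @ gH; b1 -= 0.1 * gH.sum(0)

acc = ((np.maximum(X @ W1 + b1, 0) @ W2 + b2).argmax(1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Scanning a range of temperatures and plotting the two output activations against T should then reproduce the crossing near the critical temperature described above. Note that the hidden layer matters here: a single linear layer on raw spins cannot separate the two ferromagnetic blobs (mostly up or mostly down) from the paramagnet, since its output is odd under a global spin flip.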
So that was interesting, and again we were able to extract the critical exponent. But then we came up with an analytical understanding of what the neural net did, because people tend to say these things are pretty black-boxy; we wanted to open the box a little, and we did. We found out that what the neural net does is actually just compute the magnetization, as we would. We developed a very simple analytical model of this, and that's why going from the square lattice to the triangular lattice works effectively. We even tried changing dimensionality, and it still works: I don't have the results here, but you can run the three-dimensional Ising model through the same neural net and you still get the critical point of the 3D model, basically because all these transitions have the same order parameter. A toy version of this interpretable classifier is sketched below.
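In that spirit, the interpretable stand-in for the trained network is essentially a threshold on the magnetization; here is a toy version (the 0.5 threshold is my arbitrary choice, not a number from the talk):

```python
import numpy as np

def classify_by_magnetization(spins, threshold=0.5):
    """Classify an Ising configuration from ANY lattice or dimension by the
    absolute magnetization per spin; this lattice-independence is why a net
    trained on the square lattice transfers to triangular or 3D lattices."""
    m = abs(np.mean(spins))
    return "ferromagnet" if m > threshold else "paramagnet"

rng = np.random.default_rng(1)
print(classify_by_magnetization(np.ones((8, 8, 8))))               # ferromagnet
print(classify_by_magnetization(rng.choice([-1, 1], (8, 8, 8))))   # paramagnet
```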
So that was pretty neat, we thought. But these phases are pretty easy to recognize: you don't need an algorithm to tell you "this is a ferromagnet" or "this is a paramagnet"; you recognize them by eye, basically. So this is interesting, but can we do something more non-trivial in some sense? Can we deal with disordered and topological phases, which perhaps do not even have an order parameter? Phases of this type are harder to recognize, and they are important technologically: the fractional quantum Hall effect, quantum spin liquids, gauge theories; they all have potential applications. There are also Coulomb phases, highly correlated spin liquids typically described by an emergent electrodynamics; even common water ice is one of them, as are spin ice materials. We worked with two such examples; today I'm going to tell you about the Ising gauge theory.

It's a gauge theory from the 70s, Wegner's Ising gauge theory. The Hamiltonian is very simple and classical: it's defined on the square lattice, but the spins live on the bonds of the lattice, and the Hamiltonian is just a product over the four spins on each plaquette. The interesting thing is that in the ground state at zero temperature there is no order: the spin-spin correlations decay exponentially fast, the same way they do at high temperature, at infinite temperature basically. This is the phase diagram of the system: there is a constrained, classically disordered ground state with topological order, as shown in this paper by Claudio Castelnovo and Claudio Chamon, and then there is the high-temperature phase, which sets in at any finite temperature, basically.

So let me show you why this is difficult for machine learning, and for our eyes. One of these two is a ground-state configuration and the other one is at high temperature; you may want to guess. It's a little bit harder, right? Because there's no order parameter, which is what we pick out very easily; here, in both configurations, in both phases, the correlation functions decay exponentially, so you can't tell as easily. So: this one is infinite temperature and this one is a ground state.

We tried the same feed-forward neural network I was showing you before, and we weren't able to train it successfully; it gets 50% accuracy, which is what you would get by guessing, because with only two phases random guesses are right half the time. But then we trained a convolutional neural network, which includes more prior information about the system: now the neural net knows that you are in two dimensions and that there is locality, that one spin here is connected to the ones above, below, to the right, and to the left. You're including a bit more of an inductive bias, and then we're back to having good accuracy. So that was nice, but we also wanted to understand analytically what's going on, and we were able to derive a simplified version of the trained neural net with our own minds: we figured out what the thing is doing and simply wrote it down analytically. What it does is go to each plaquette and check its parity, meaning it checks whether the product of the four spins is +1, such that the energy is low. That's what the trained neural net does, and that's what my analytical one does too. With that we get 100% accuracy; we were able to recognize these phases, and it works successfully. A minimal sketch of that parity check follows.
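Here is what that analytical "plaquette neuron" boils down to, in a minimal sketch (my own reconstruction; the bond-indexing convention is an assumption): a configuration is a ground state of Wegner's Ising gauge theory exactly when every plaquette product of its four bond spins is +1.

```python
import numpy as np

def plaquette_products(sx, sy):
    """Plaquette products on a periodic L x L square lattice.
    sx[i, j]: spin on the horizontal bond leaving site (i, j);
    sy[i, j]: spin on the vertical bond leaving site (i, j)."""
    return (sx * np.roll(sx, -1, axis=0)      # bottom and top horizontal bonds
            * sy * np.roll(sy, -1, axis=1))   # left and right vertical bonds

def looks_like_ground_state(sx, sy):
    """The 'analytical neuron': fire only if no plaquette is violated."""
    return bool(np.all(plaquette_products(sx, sy) == 1))

L = 16
sx, sy = np.ones((L, L), int), np.ones((L, L), int)
print(looks_like_ground_state(sx, sy))   # True: every plaquette satisfied
sx[3, 5] = -1                            # one flipped bond violates two plaquettes
print(looks_like_ground_state(sx, sy))   # False: flags the disordered phase
```

At infinite temperature a finite fraction of plaquettes are violated, so this check separates the two phases perfectly, which is the 100% accuracy mentioned above.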
But let me move on, because I have only one minute now, so I won't be able to discuss the whole thing; I'm going to show you just one example for quantum mechanical systems, and it is Kitaev's toric code, the quantum error-correcting code, with these convolutional neural networks. It basically amounts to taking this toy model, this analytical neural net, and realizing that the cold neuron of this network is the ground state of the toric code. The toric code is this Hamiltonian here, which is Wegner's Ising gauge theory plus a term that induces quantum fluctuations, defined over the vertices of the lattice (in standard form, $H = -\sum_p \prod_{i \in p} \sigma^z_i - \sum_v \prod_{i \in v} \sigma^x_i$). When I was trying to understand this system, I noticed that when people using tensor networks write down the ground state, they use this expression, and that's exactly what I had encoded when I was playing with this analytical model. So in the end I concluded that the ground state of the toric code, on top of being a PEPS, a tensor network written in this form, can also be a neural network. That was another of our results, and it was the inspiration for us to start using neural networks as ground states of many-body systems. And we were not alone: Giuseppe Carleo was also ahead of us and wrote a very nice paper, and there are many more.

So with that, let me conclude; I won't be able to go through my second example. We encode and discriminate phases and phase transitions, conventional and topological, using machine learning, neural-network technology, and we understand those classifiers analytically, which is nice. We wrote down some ground states using neural networks. And this part I didn't show: actually using neural networks to do quantum state tomography. I'd like to invite everybody in, because I think there is real potential for discovery in using machine learning for physical problems. And with that, I conclude.