There we go. I'm in the right spot, Phil, because I can't see anything. Thank you all for coming today. We are very excited to have Dr. Economou here from Virginia Tech. Hopefully you had a chance to check out Sophia's bio and understand this interesting talk we're about to have. We're hosting Sophia through our quantum collaborative, which is a new Arizona state-funded initiative, and I wanted to put this out there for anyone interested: staff, faculty, students. We're excited to have anyone interested in this complex landscape of quantum information science and technology. Without further ado — you're not here to see me. If you do want to reach out, my name is Sean Dudley, chief research information officer. And now I'll hand it over to Sophia.

All right, thanks. Thank you for the invitation and the introduction. It's a pleasure to be here — nice to take a break from the weather that's cooling off on the east coast and enjoy your nice warm weather here. So today I'll talk about quantum technologies. I'll give a little bit of general background. I decided last night to remake my whole introduction to make it a little more accessible, since the audience is probably mixed — people from engineering and physics, and maybe some people without much background in quantum. That's what I'm assuming. I'm also happy to take questions during the talk. I was told we can do that for the people in the room, and the Zoom audience can ask later — so please interrupt me, especially if you're in the room.

All right. When we talk about quantum technologies, we usually mean four different pillars. The most well known is quantum computing; I'll talk about each one of these. Quantum communications and quantum networks is also an area seeing a lot of investment and research effort, and then there's quantum sensing and quantum simulation. Strictly speaking, quantum simulation can be thought of as part of computing, but it's a really big field in its own right, so we tend to separate it out.

The foundation of all these technologies is a set of fundamental features of nature, and specifically of quantum mechanics. These are the ingredients common to all of them. First, you need some kind of physical system — just like classical bits in our computers, where some physical system represents zeros and ones. Here, that role is played by a quantum system. Typically quantum systems are very small, and in some cases very cold, but for now just think of a system that displays quantum mechanical properties. One such property is quantum superposition: unlike a bit in our existing computers, which can only assume the states zero and one, a quantum bit can assume a superposition state — a linear combination of zero and one. Arguably the most important feature of quantum mechanics that differentiates it from classical physics is entanglement, which you might have heard a lot about, especially following the Nobel Prize. It amounts to non-classical correlations between quantum bits; I'll talk about that more in a moment. Another distinguishing feature of quantum mechanics compared to our classical, everyday world is that when we measure quantum systems, we fundamentally disturb them. Measurement outcomes are probabilistic, and measurement is irreversible. That's a very important thing to take into account in these technologies — it can be a feature, and it can also be an issue we need to account for when we think about algorithms. And then there's something called the no-cloning theorem, which says that you cannot copy unknown quantum states, and that has implications for information security: if you encode information in quantum systems, there's a fundamental sense in which it is secure.
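As a concrete aside, the measurement behavior described above — probabilistic outcomes when you measure a superposition — can be illustrated with a short numpy sketch (the amplitude values here are arbitrary example choices):

```python
import numpy as np

# A qubit in superposition a|0> + b|1>; amplitudes chosen arbitrarily.
a, b = 0.6, 0.8                            # |a|^2 + |b|^2 = 1, so normalized
probs = np.array([abs(a)**2, abs(b)**2])   # Born rule: outcome probabilities

rng = np.random.default_rng(0)
shots = 100_000
outcomes = rng.choice([0, 1], size=shots, p=probs)

# Each shot yields 0 or 1, never the amplitudes themselves; repeating the
# experiment many times is the only way to estimate |a|^2 and |b|^2.
print(probs)            # [0.36 0.64]
print(outcomes.mean())  # fraction of 1-outcomes, close to 0.64
```

The point of the sketch is that one measurement gives a single irreversible outcome; the underlying amplitudes are only recoverable statistically, over many repeated preparations.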
All right, so let me expand a little more on the idea of entanglement, especially because of all the news stories following the Nobel Prize, some of which were disturbing. So I'll try to clarify it and distinguish it from classical correlation. Think of a classical, everyday scenario where you have two boxes and you hide an object in each box. The person who hides them does so according to a rule, and the rule is: if one is red, the other one is green. You take one of these boxes, not knowing which is which. If you open your box and find the red object, you automatically know the other one is green. This is classical correlation, according to the rule we made. Now, the sense in which quantum systems are different is that the color property is not defined ahead of time. In the classical case, the color is already there — you just don't know it. For quantum systems, this property is not defined yet: each individual object does not have a well-defined color, and the color is determined when you observe it. Once you observe it and see red, then automatically the other one is green — but you could equally have observed green in your box, in which case the other one would be red. So this is something very different from the classical case. Another difference is that this correlation persists not only if I ask "is this red or green?", which was my rule. This is an analogy, so you can also think of some other pair of colors: I could also ask "is it orange or blue?" I picked colors across from each other on the color wheel so that I have a sense of a binary choice — you can think of them as zero and one in different so-called bases. This has important implications for security and communications.
I'm not going to talk about my own research in communications, but I want to give you a feel for the other technologies that I won't cover in more detail. This has implications for eavesdropping: if someone is trying to eavesdrop, they don't know whether you and your partner will ask "is it red or green?" or "is it blue or orange?". So there is a sense in which, because these properties are not defined beforehand, they cannot be measured beforehand and exploited — and the eavesdropper also doesn't know which color direction you're going to choose. That hopefully gives you a sense of why you can have something secure there. Another important thing is that once you make the measurement and check what color your qubits are, so to speak, these correlations are gone — you use them up.

Okay, so quantum computing takes advantage of entanglement in a quite nontrivial and subtle way. It also takes advantage of constructive and destructive interference. The types of problems that we know we can speed up are the following. Factoring stands out because it's the only problem for which we know there's an exponential speedup compared to our classical algorithms. And the reason we care about factoring — we talk about factoring, but really what the quantum computer is good at is period finding: you give it a function, and it tells you what the period is. It's quite nontrivial to see how you go from that to factoring. But it has implications for security, because when we give our credit card numbers and put our passwords online, the security depends on the fact that it's difficult to factor — or to find a period. There are other well-known problems; another one is search, sometimes called Grover's algorithm. This problem has a polynomial speedup, which is not as impressive as factoring in that sense.
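To make the period-finding-to-factoring connection concrete, here is a small classical sketch. The numbers (N = 15, a = 7) are purely illustrative; the `find_period` step is the part a quantum computer would speed up exponentially, since classically it takes time exponential in the number of digits of N:

```python
from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N) -- brute force.
    This is the step Shor's algorithm accelerates."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7              # toy semiprime and a base coprime to it
r = find_period(a, N)     # r = 4, since 7^4 = 2401 = 1 (mod 15)

# For even r, a^(r/2) - 1 and a^(r/2) + 1 share nontrivial factors with N:
p = gcd(a ** (r // 2) - 1, N)
q = gcd(a ** (r // 2) + 1, N)
print(r, p, q)            # 4 3 5
```

So once the period is known, the factors fall out from two cheap gcd computations; all the hardness lives in the period finding.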
And then finally simulation, which we believe most likely has an exponential speedup, but that's not something we can prove in the same way we can for factoring. Because there's a lot of hype around quantum computing — and I've reviewed a lot of bad proposals lately — I would also like to say what quantum computing is not. The first point is, I think, the most important: quantum computers are not general-purpose machines that speed up any problem. That's a common misconception, especially from people coming into the community — that if you have a quantum computer, you can throw anything at it and it'll speed it up. That's not the case; there's certainly no reason to believe that, and we don't believe it's going to be the case. People sometimes say a quantum computer explores many solutions in parallel. That would be really nice — if it could do that, we could arguably go back to the first point and solve any problem. The tricky thing is that, as I mentioned, measurement is very singular, very unique in quantum mechanics and quantum information. When you measure your qubits at the end to extract answers, you fundamentally change or disturb the state, so you cannot really extract solutions that were, in some sense, computed in parallel. And then there are tricky things about polynomial speedups, in that you need a big resource overhead, so it's not always obvious how that resource overhead interplays with the speedups you get. These are subtleties in this field, and I just wanted to point them out.

All right, the other pillar I mentioned is quantum communications — sometimes people talk about the quantum internet. As I alluded to at the beginning, there is inherent security in encoding information in quantum systems. That's kind of nice, because quantum computers threaten the security of our information online.
But then quantum mechanics offers the answer to that and says: if you encode your information in quantum systems themselves, you can take advantage of this inherent security. So you can do various things. Quantum key distribution is perhaps the most well known. You've probably heard about the satellite that China has put up, which performs quantum key distribution between different points on Earth — at a really slow rate, but it's a proof-of-principle experiment. And there are other applications we envision for quantum communication networks, including synchronization and enhancing sensing at a large scale. A very interesting application, which as far as we can tell is unique to quantum communication, is the one on the upper right. Imagine that at some point some company will have a quantum computer. A lot of people will presumably want to use it, including other companies that have their own IP and their own secrets; the government will want to use it. So you don't want to do what we do now with IBM, where we send our instructions and people at IBM can just see what we're going to simulate. Eventually you will want your algorithm itself to be secret. By transmitting the information in quantum systems, you can actually hide your algorithm: you send your quantum bits remotely to the company owning the quantum computer, they run it according to your instructions, but they don't know what you're running. You can even set it up in a way where they don't even know how large your algorithm is. And then, in addition to these applications, which are more communications-based, another way communications connects to quantum computing is through architecture: the way we think about architectures of quantum computers is that we're going to have some modules that we need to connect in some way. Sometimes that's called distributed quantum computing.
So a lot of the things that we do — that the community does — in quantum communications carry over to this model of computing, where you break up your architecture into small modules and somehow need quantum connections between those modules. And let me also put up the slide about what quantum communication is not. This is probably the most disturbing thing you've heard following the Nobel Prize: talk about faster-than-light communication. That cannot be done. It's actually something you hear a lot — I googled "faster than light" last night, and the first thing I got was wrong. I was impressed by that; usually Google is pretty reliable, but not here.

And let me now get to the third topic, which is what I'll spend the most time on in this talk: quantum simulation. The reason we care about this — especially people in science and engineering — is that if we can simulate quantum systems with high accuracy and at large enough scale, we can do things like create new materials and predict the properties of molecules and chemical reactions, and that has applications in medicine and various types of industry. Also, from a fundamental physics point of view, we can create new phenomena and new materials — hopefully things like room-temperature superconductivity, etc. And of course, the reason we want to use quantum computers to do that is that the electrons in molecules and crystals are quantum mechanical systems themselves. Because they are quantum mechanical systems, and they are generally in these entangled states, the size of the space they occupy — the configuration space — scales exponentially with the number of electrons. As a result, it's computationally hard to solve these problems, because even storing the information takes space we fundamentally don't have in classical computers, due to this exponential scaling.
And here you can see, in the left column, the number of qubits whose state you want to store, and the RAM required to store that state. At around 40 to 50 qubits you start hitting a wall — something you will never have classically. So the idea is that we want to use other quantum systems, which naturally have this exponential scaling in their state space, to simulate the quantum systems we're interested in.

Okay. Since I think I'm hosted primarily by the engineering college, I thought I would show a stack. This is a physicist's sense of a stack. At the bottom you have the physical layer — your atoms, your superconducting circuits, your photons, whatever your qubit is. There's another layer on top of that where you need to control them. Then you have to somehow connect them and envision how you actually make an architecture out of them — and there's a lot there that physicists, I think, know almost nothing about, like computer architectures. So that's a place where you need people with that kind of expertise, but of course also quantum expertise. On top of that you have algorithms and applications. So I'll say a few things about a few of these different planes in the stack — I'm not going to talk about architectures or applications, but I will touch on the rest.

All right, so what kinds of systems do people explore for qubits? This is just to give a little bit of a sense, especially to people who are not in the field, that there are a lot of different physical implementations, and each of these has its own community around it. And there's a lot you need to develop for something to be a good, viable qubit: you need to understand the physics at a fundamental level, to know how to control it, to know how to combine it with other such qubits, etc.
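The scaling behind that table can be reproduced directly: an n-qubit state is a vector of 2^n complex amplitudes, so the memory doubles with every added qubit (the 16 bytes per amplitude below assumes double-precision complex numbers, which is an assumption of this sketch, not a figure from the talk):

```python
def state_vector_bytes(n_qubits: int) -> int:
    """Memory to store a full n-qubit state vector: 2^n complex
    amplitudes at 16 bytes each (double-precision real + imaginary)."""
    return 16 * 2 ** n_qubits

for n in (30, 40, 50):
    print(f"{n} qubits -> {state_vector_bytes(n) / 2**30:,.0f} GiB")
# 30 qubits -> 16 GiB (a laptop), 40 -> 16,384 GiB (16 TiB),
# 50 -> 16,777,216 GiB (16 PiB) -- the classical "wall".
```

Each additional qubit doubles the requirement, which is why the wall sits so sharply around 40 to 50 qubits regardless of hardware improvements.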
All right, so one type of qubit is spins in semiconductor quantum dots. Early on this was done in gallium arsenide; now people tend to look more at silicon, for well-motivated reasons. Then there are trapped ions — atoms missing electrons, so they can be held in electromagnetic traps and combined into long chains. There are superconducting qubits, which is what IBM, Google, and Amazon are using. And then there are also photons, and other systems that are part of quantum technologies but not necessarily for quantum computing. So in this table I highlighted the primary ones people are interested in for quantum computing. I did not put topological qubits here because there are no topological qubits yet. And then some systems are really good for quantum communications, where the criteria are slightly different — things like defects in solids. You might have heard of NV centers in diamond; they have an optical interface, and of course optics is what we use to transmit information classically as well.

Yeah — so the question was: given that your quantum computer, based on what I just showed you, is most likely going to be some matter qubit — superconducting circuits, or trapped ions, or some other kind of matter — whereas communications are done with optics, with photons, how does the no-cloning theorem impact that? Well, you're not cloning information. You can have two quantum systems: a state here that you may not know, and your blank qubit, so to speak, here — think of it as a state of zeros. Then your quantum system comes in from the network — the photons arrive carrying this unknown quantum state — and you can do something called a swap operation. In doing this, you still don't measure the state; if you measure it, you destroy it, and you don't want that. And you don't clone it.
So in that case, the state is gone from the carriers of information that came in from outside. It's very nontrivial how to do that, and very difficult — there are huge amounts of loss — so it's something people are working on quite a bit, and they are looking at different paradigms for how to do it. One way you can imagine is that the photon you send in has a frequency that matches some transition frequency in your system. It's unlikely that will be the case, especially because you'll most likely want your photons to be in the telecom band to minimize losses. So most likely you will have to have a conversion step. Right — that is called transduction. And it's something people worry about not only for communications but also for this distributed quantum computing model. For example, IBM is interested in transduction because there's only so much you can fit in one dilution refrigerator, and you need some connections. If you have that step, you need your gigahertz-frequency excitation to become optical, and then go back to gigahertz. Another approach: instead of this release-and-catch scheme I mentioned, where we swap the state, you can create entanglement between the two systems — again you need some carrier of that information, so you typically use photons. You can create entangling links between different parts, say two different modules of the quantum computer, and then the entanglement is a resource you can use up to do the state transfer. That's called teleportation. Any other questions? Okay. I'm not going to talk about physical systems too much here; I just wanted this as an introduction, to show that there's a lot going on and each of these is its own kind of community.
And here I listed some of the companies — if I wanted to list them all, I don't have space; they'd eat up my whole slide. So now, moving further up the stack, let's talk about quantum states. I think a way to think about qubits, if you're used to classical bits, is the following. You can have a classical bit in state zero or one — we've already made this mental abstraction from the physical layer to zero and one. So if I have something that's either zero or one, I can think of it as a vector in two dimensions. That probably doesn't help you much in the classical case, but nothing prevents you from doing it: if I have my classical bit in state zero, I can think of that as the vector (1, 0), and if it's in one, I can think of that as the vector (0, 1). Okay, so then what can I do to a classical bit? The only gate other than the identity — doing nothing — is to go back and forth between zero and one, which of course is well known as the NOT gate in logic. Since we've agreed I'm going to use vectors for the two states of my bit, what would translate one vector to another is a matrix, so I can think of this NOT gate as a matrix. Again, in the classical world we don't have much motivation to do that, but you can set up the formalism this way. The reason I want to do this is that it motivates more easily why we use linear algebra to represent quantum states as vectors: quantum bits can be in superpositions of zero and one, so the only two states I could assume classically now become a vector with these continuous parameters a and b.
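The vector-and-matrix picture just described takes only a few lines of numpy to write out (the equal-amplitude superposition at the end is an arbitrary example):

```python
import numpy as np

zero = np.array([1.0, 0.0])      # bit/qubit state |0> as a vector
one  = np.array([0.0, 1.0])      # state |1>
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])       # the NOT gate as a matrix (Pauli-X)

assert np.array_equal(X @ zero, one)   # NOT flips 0 -> 1
assert np.array_equal(X @ one, zero)   # ... and 1 -> 0

# A qubit can also sit in a superposition a|0> + b|1>:
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi = a * zero + b * one
print(np.linalg.norm(psi))       # ~1.0: amplitudes must stay normalized
print(X @ psi)                   # this particular state is unchanged by NOT
```

The same matrix acts sensibly on both the classical-looking basis vectors and the genuinely quantum superposition, which is exactly why the linear-algebra formalism is the natural one.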
I do need to normalize this vector — I'm not going to go into that kind of detail — but the point is that there's this continuous parameter in the vector, and we can represent this on a sphere that we call the Bloch sphere, which you'll sometimes see as a cartoon and sometimes actually being used for some purpose. Now, because this vector can point anywhere, I can apply operations to it that are more general than the NOT gate: I can rotate it in the space where it lives. In this case, this is some rotation parameterized by an angle theta. So quantum gates generally carry parameters. And you might have seen these kinds of diagrams, which are quantum circuits; some gates don't have parameters, some do. So in this example, this R-theta here is something that looks a little bit like that rotation. Because I'll talk about parameterized quantum circuits when I get to quantum simulation, I just want to give you a sense of what these parameters are. If I just have one quantum bit, I have a rotation that we can visualize in 3D. If I have many qubits, I can think of some many-qubit analog of that rotation, which also has parameters.

Okay, so now we have our qubit and we know what gates we want to apply to it; we need to implement them physically. The same way our classical computer has bits and we send in voltages, there's something analogous we do here, and the thing we need to do is solve a control problem. We have a matrix representing the energy of the system, called the Hamiltonian, which is a function of time, and the way I change it in time is by sending in electromagnetic fields that change in time. By sending in these fields, I generate different types of evolution of my qubit or qubits. And then these fields are switched off.
So whatever has happened up to that point is my gate. This is just to give a sense of the physical implementation of a quantum gate. What's nice is that now, with the access to IBM that you guys have, you can actually go and do experiments on your own, and even play with these fields at a low level — not just use the ready-made parameterized gates the system gives you, but program your own electromagnetic fields for control.

So, in my group we work across different directions along the lines I discussed: we do quite a bit with quantum control and quantum computing closer to the physical layer, and we also have research in quantum networks, photonic quantum information processing, and quantum algorithms. When you invited me, people seemed most interested in the third, so I'm going to focus the remainder of the talk on that. And that lies here.

Okay, so here's my outline for the remainder of the talk. I'll say a few more things about quantum simulation, and then I'll focus on the types of systems we have now and the types of algorithms we can develop at this point. And if I have time, I'll talk about two different directions we're pursuing in my group along those lines.

Okay, so this is the obligatory Feynman quote — he said that nature is quantum mechanical, so you should use quantum mechanics to simulate it. And I would say that at this point quantum simulation is the most interesting application of quantum computers; it'll be huge if it works, and it will have not only industrial applications but a lot of scientific applications as well. I think that when Feynman was talking about quantum simulation, he was probably thinking of something analog. What that means is that you imagine you have a system that you can control really well — let's say a bunch of atoms trapped in some potential — and the thing you want to simulate is a bunch of electrons in a crystal.
You could tune the system you control well so as to recreate, on your quantum processor, the Hamiltonian — the operator representing the energy — of the system you're interested in. You can do that, and people actually do; it's called analog quantum simulation. And the reason you'd want to do that rather than an experiment is that you have more control over your simulator: you can change things that you cannot change in a real crystal. You can change the spacings between your lattice sites, how many electrons you have, the temperature, the interactions, etc. What I'm going to focus on more today is something called digital quantum simulation, where we use these quantum gates I mentioned and quantum circuits. In this case, there's no sense in which I'm recreating a Hamiltonian on my simulator, so this is quite different from analog simulation. And it's more general by assumption, because if I'm able to do any type of gate, then by combining these gates I can simulate any problem, whereas the analog simulator is a dedicated way to simulate one problem of interest.

So the first thing you need to do is take the problem you're interested in — which could be this strongly correlated molecule shown over here, made up of a bunch of different atoms — and somehow map it onto your quantum computer. On one side you have this naturally occurring physical system; on the other side you have this human-made processor; and somehow you need to figure out how to do this map. So let's go back and think about the chemistry we learned as undergraduates, and first look at an atom. An atom has these orbitals — we're simplifying a lot by thinking in terms of orbitals, but let's do that — and these are the states the electrons can take. Maybe these are familiar pictures from chemistry: these blob-like things representing probability distributions, essentially. And one thing we need to take into account is that electrons are so-called fermions. What this means is that each of these wave functions, or probability distributions, can only be occupied by one electron once I take into account something called spin — or you can think of it as two electrons per orbital. Now, we can think of these orbitals — these states available to the atom or molecule — as the fundamental thing, with the electrons just occupying them. And because each state can either be occupied or unoccupied, I already have the sense of a binary variable: for each of these states, either I have an electron or I don't. And since it has this binary sense, and it can also be in a superposition, it really is like a qubit, which has states zero and one and can also be in a superposition. So a very natural way to map the problem is to map the occupation of these states onto the qubits of our quantum computer: each qubit on our quantum computer then corresponds to one of these states of the atom or molecule.

Let me show a more technical slide, just for the experts — essentially what I just described in a hand-wavy way, with equations. We start with some Hamiltonian, and then we choose a basis. Choosing a basis just means I choose these blobs, these states. Once I do that, I can do a classical preprocessing step that allows me to write down this Hamiltonian operator — I know the numbers that enter it. So this operator has numbers and operators, and I can write it down; the difficult thing is solving it, and I'll tell you what I mean by that. As I mentioned, the fact that each of these orbitals can be occupied or not occupied means that the electrons in your physical system somehow know something about the other electrons — there are correlations that naturally exist in your molecule, or whatever you're simulating.
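This occupation-number mapping is essentially what the literature calls the Jordan–Wigner encoding, and a small numpy sketch (three orbitals, sizes purely illustrative) shows both the encoding and the price it exacts: the qubit operator for a single orbital drags a string of Pauli-Z matrices along all the preceding qubits, which is what makes distinguishable qubits behave like fermions.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0],
               [0.0, 0.0]])      # lowering operator: occupied |1> -> empty |0>

def kron_all(ops):
    return reduce(np.kron, ops)

def annihilate(j, n):
    """Jordan-Wigner a_j for n orbitals: a Z 'string' on every qubit
    before j, then the lowering operator on qubit j."""
    return kron_all([Z] * j + [sm] + [I2] * (n - j - 1))

n = 3
a = [annihilate(j, n) for j in range(n)]

# Orbital 2 is a single "local" mode, yet its qubit operator acts
# nontrivially on qubits 0 and 1 too, via the Z string:
assert np.allclose(a[2], kron_all([Z, Z, sm]))

# The Z strings are what enforce fermionic statistics on distinguishable
# qubits: the anticommutators {a_i, a_j^dag} = delta_ij come out right.
for i in range(n):
    for j in range(n):
        acomm = a[i] @ a[j].conj().T + a[j].conj().T @ a[i]
        target = np.eye(2 ** n) if i == j else np.zeros((2 ** n, 2 ** n))
        assert np.allclose(acomm, target)
print("Jordan-Wigner anticommutation relations verified")
```

Deleting the Z strings breaks those anticommutation relations, which is one concrete way to see the overhead discussed next: local fermionic operators become operators living on many qubits.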
So when we map this onto our qubits — because our qubits are distinguishable; when we get an IBM system or some other matter-qubit system, we know this is qubit one, this is qubit two, and so on — we have to force the quantum processor to have this sense of each qubit knowing something about the other qubits. So these operators translate into very delocalized operators — operators that live on many qubits. You have to pay for the fact that your simulator is fundamentally very different from the problem itself.

Okay, so we've figured out the mapping. Then what do we do? In a perfect world, our quantum systems behave great: our quantum computer is in a box, I don't need to know anything about quantum mechanics, and I just run things. That's not what's going on right now. We are still at a point where the quantum systems are really bad: they're noisy, there are a lot of errors, and they're still not very large. Somehow we have to work with that, and the types of algorithms I'll talk about today focus on these types of systems. There also exists a part of the community that doesn't care about this and thinks that in 2030, or however many years, we're going to have a perfect quantum computer, so all their algorithms are developed for those. At this point we don't have the well-defined stack that would let us really work that way — so in some sense the stack is a little bit of a lie, because doing something useful really requires understanding enough of all of it. Still, even though it's sad that we have so many errors, we're hopeful that if we're clever about things we can do something useful. And let's narrow down what this useful thing is. Imagine I have a molecule and I want to do something very simple: I just want to use my quantum system to calculate the ground-state energy of that molecule.
That's a classic problem in quantum chemistry. Now, the quantum processor is not as powerful as we'd like it to be, as I mentioned, so we want to leverage our classical computers, which are quite powerful, and somehow combine the capabilities of both. In these variational algorithms, which I'll talk about, we use a principle of quantum mechanics called the variational principle — also used a lot in classical simulation — which says that if I create a trial state on my simulator, that state cannot have lower energy than the ground-state energy. A trial state means some state created through a circuit like the one shown over here, which has these parameters I mentioned earlier in my talk; I change these parameters, randomly or according to some recipe. By changing the parameters in a smart way — since I'm bounded from below by the true ground-state energy — if I've chosen my circuit correctly, then when I cannot go any lower, hopefully I have found the energy of the system. Okay, so this is called the variational principle; it's undergraduate textbook material from a quantum mechanics class. And the way this works is: I take the system of interest, we figure out the mapping I mentioned before, we prepare quantum states and make measurements, we feed the outcomes of these measurements into our classical computer, which performs an optimization, and then we update the parameters. The fundamental thing we need here is this parameterized circuit: it tells me which gates I'm doing and which parameters I vary. And by changing these, and making measurements — a lot of measurements, actually — hopefully we can get the answer. So there have been experimental demonstrations of this.
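Here is a minimal classical emulation of that variational loop, with a made-up two-level Hamiltonian and a one-parameter "circuit". On a real device the energy would be estimated from many repeated measurements rather than linear algebra, and the optimizer would be smarter than a grid scan:

```python
import numpy as np

# Toy problem: a single-qubit Hamiltonian whose ground energy we want.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def trial_state(theta):
    """One-parameter ansatz: a y-rotation R_y(theta) applied to |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = trial_state(theta)
    return psi @ H @ psi          # expectation value <psi|H|psi>

# Crude classical "optimizer": scan the parameter, keep the best energy.
thetas = np.linspace(0, 2 * np.pi, 4001)
best = min(energy(t) for t in thetas)

exact = np.linalg.eigvalsh(H)[0]  # true ground energy, -sqrt(1.25)
print(best, exact)                # variational: best >= exact, here ~equal
```

The variational guarantee is visible in the last lines: every trial energy sits at or above the exact ground energy, and a good enough ansatz lets the optimizer close the gap.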
This is an early paper from about five years ago from IBM, which was the first time something that looks quite like the real thing was shown, for the three molecules shown over here. These are simple molecules; you can simulate them on your laptop. So this is more of an engineering achievement than a quantum simulation achievement, since these are things we already know. But it was pretty impressive, especially for people who've been in the field for quite a while, to see this done on a quantum processor. So this was exciting, at least to me. There's been other work since then; the Google group has also done quite a bit of work with more than 10 qubits, and this might not even be up to date because it's a very fast-moving field. What I'd like is to go from these toy molecules, which we know how to solve, to something we don't know how to solve, something too large to solve on our laptops or supercomputers. And how do we do that? Well, of course we have to work on both the experimental and the theory side: on the experimental side we need better qubits, obviously, and on the theoretical side we can improve our algorithms. So the thing I'll focus on for the remaining maybe five-ish minutes that I have is the work we're doing in my group on how to do the state preparation. Okay, so everything I'll talk about is about the circuit I showed you: what do you do to choose a good circuit? Here's what people had done before. The simulations I showed you use something called the hardware-efficient ansatz. This kind of goes back to the idea that our qubits are pretty bad, so we're just going to do whatever is easiest for them to do. And what's easiest for them to do is the single-qubit gates, shown by these boxes, each living on only one line. And that's where all my parameters lie. These are rotations in the space I showed you.
And then I have some kind of entanglement, which I do need to build up, denoted by these gates across different lines. And I just stack these layers. If I make the circuit deep enough, the hope is that I explore enough of the space where the solution lives, and that for the right choice of parameters I've actually found the solution. And there's no guarantee there; you are really just hoping that's the case. Another thing you don't know here is when to stop. This is a layered set of gates with a repeating pattern, so there's nothing that tells you when you should be happy enough that this is your circuit, parameterize it, and optimize. I should also mention that there are issues with optimizing this, too: if you come from a machine learning background, things like trainability have been shown to be hard. Another thing you can do is take some state preparation circuit that's more natural to the problem you're trying to solve. You can ask what chemists have been doing for decades when simulating these problems on our classical computers, look at that for inspiration, and create circuits that try to mimic it. It turns out that if you do that, your circuits just become too long, so it's very impractical. And we've also shown that when you try to make these circuits shorter, it leads to inconsistencies. Still, for small problems this has been done experimentally, but it's not the way to scale up. So what we have introduced with collaborators at Virginia Tech is the following: don't just change the parameters in your circuit, change the structure of the circuit itself. And what we want is to not use too many quantum resources, because our systems are bad. We only want to add as many operators as we need. As we keep adding more operators, our systems degrade.
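The layered hardware-efficient structure described above can be sketched schematically (my own simplified rendering, not IBM's implementation): one parameterized single-qubit rotation per qubit, a fixed chain of entangling gates, and the layer pattern simply repeated, with nothing problem-specific anywhere.

```python
# Schematic hardware-efficient ansatz: repeated layers of parameterized
# single-qubit rotations followed by a fixed CNOT entangling chain.

def hardware_efficient_ansatz(n_qubits, n_layers):
    circuit = []
    for layer in range(n_layers):
        for q in range(n_qubits):
            circuit.append(("RY", q, f"theta_{layer}_{q}"))   # parameterized
        for q in range(n_qubits - 1):
            circuit.append(("CNOT", q, q + 1))                # fixed entangler
    return circuit

circ = hardware_efficient_ansatz(4, 3)
n_params = sum(1 for g in circ if g[0] == "RY")
print(len(circ), n_params)   # 21 gates, 12 parameters
```

Note what is missing: the gate list is identical for any molecule, which is exactly the genericness the adaptive approach below is designed to avoid.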
So at some point, if we add too many, we're just going to get garbage at the end; our simulation will no longer be reliable. And then we also want to encode into this circuit something about the problem we're simulating. We don't want to use something generic, which is what the hardware-efficient ansatz I showed you is: it's very generic, it doesn't matter what molecule you're simulating, it's the same ansatz. What we want to do here is inform ourselves somehow about the problem we're trying to solve and create a good preparation circuit. So the main insight we had is to make the algorithm adaptive. Don't fix the circuit up front, but create it as you go, in tandem with the information you extract from your quantum computer. Pictorially, you can think of it this way: on the left I've symbolized my circuit. I start with some short circuit, some simple gates. Then I can add more, and add more, until I'm done. So this raises two questions, right? How do I decide what to add? And how do I decide that I'm done? So there are two ingredients that enter this algorithm. One is: what are my options for these U_j's? We call this the operator pool. It's the gates that are available, or that I would possibly like to implement on my quantum computer. I can parameterize them and apply them to some reference state. It takes some skill to decide on that operator pool. But once you've decided it, the next question is: how do I decide which one to add to grow my circuit? I need some kind of criterion that tells me what to do next. The criterion we've used so far is to ask which of these operators, if I added it, would change the mean value of the energy the most. So you can think of this gradient as telling me which direction in this circuit-creation space is the most important.
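A small classical sketch of this gradient criterion (my own toy single-qubit example, not the group's code): for a candidate anti-Hermitian generator A with trial state exp(theta A)|psi>, the energy gradient at theta = 0 is the expectation value <psi|[H, A]|psi>, and the pool operator with the largest gradient magnitude wins.

```python
import numpy as np

# Toy operator-pool gradient selection: score each candidate generator by
# |<psi|[H, A]|psi>|, the energy derivative at theta = 0.

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H = 0.7 * Z + 0.4 * X                      # toy Hamiltonian
psi = np.array([1.0, 0.0], dtype=complex)  # reference state |0>

pool = {"iX": 1j * X, "iY": 1j * Y, "iZ": 1j * Z}  # anti-Hermitian generators

def gradient(A):
    # d/dtheta of <psi| e^{-theta A} H e^{theta A} |psi> at theta = 0
    return (psi.conj() @ (H @ A - A @ H) @ psi).real

grads = {name: gradient(A) for name, A in pool.items()}
best = max(grads, key=lambda k: abs(grads[k]))
print(grads, best)   # only the Y generator has a nonzero gradient here
```

On hardware, each commutator expectation value is itself a measurable observable, which is the point made in the next paragraph.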
And the nice detail about this is that I can recast this gradient as an expectation value, which is the thing I can measure on my quantum processor. So I can make all these measurements directly on the hardware; I can use the quantum computer itself to tell me what to do next. All right, so we call this ADAPT-VQE. It stands for Adaptive Derivative-Assembled Problem-Tailored VQE. This is just the flow chart: you have this operator pool; you use the gradient information, which can be parallelized, to tell you which operator to add next; you add your operator; you re-optimize all the parameters in your ansatz, in your state preparation circuit; and then you keep going. You add another operator, again according to this gradient criterion, and you keep going. You have converged, presumably, when all these gradients are zero: when all the gradients are zero, there's no preferred direction to go anymore, and you stop. In reality, nothing is exactly zero, so we need to modify this criterion somehow. We can ask that the gradients be smaller than a threshold, or we can say the energy should not change by more than some quantity. There are many ways you can choose it. So let me show you some simulations we've done. Everything I'll show you is done on classical computers; we are working on doing some of this on hardware. These are the dissociation curves that chemists plot, for various molecules. The first two are lithium hydride and beryllium hydride, which is what IBM actually did on their hardware. And on the right is H6. These are fictitious molecules, chains of hydrogens, and chemists like to use them to benchmark algorithms. They have strong correlations, and because they've been used as benchmarks elsewhere, they're actually useful to use here as well.
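Putting the two ingredients together, here is a minimal end-to-end ADAPT-style loop on a single qubit (an illustrative sketch under my own toy Hamiltonian and a deliberately crude classical optimizer, not the group's actual implementation): grow the ansatz one pool operator at a time, chosen by largest gradient, re-optimize all parameters, and stop once every gradient falls below a threshold.

```python
import numpy as np

# Toy ADAPT-style loop: gradient-selected growth + full re-optimization.

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H = 0.7 * Z + 0.4 * X
pool = [X, Y, Z]                      # generators; the gate is exp(i*theta*P)

def gate(P, theta):
    return np.cos(theta) * I2 + 1j * np.sin(theta) * P   # exp(i*theta*P)

def state(ops, thetas):
    psi = np.array([1.0, 0.0], dtype=complex)            # reference |0>
    for P, t in zip(ops, thetas):
        psi = gate(P, t) @ psi
    return psi

def energy(ops, thetas):
    psi = state(ops, thetas)
    return (psi.conj() @ H @ psi).real

def grad_of(P, psi):                  # <psi|[H, iP]|psi>
    A = 1j * P
    return (psi.conj() @ (H @ A - A @ H) @ psi).real

ops, thetas, eps = [], [], 1e-6
for _ in range(5):                    # adaptive outer loop
    psi = state(ops, thetas)
    grads = [grad_of(P, psi) for P in pool]
    k = int(np.argmax(np.abs(grads)))
    if abs(grads[k]) < eps:
        break                         # converged: no direction improves
    ops.append(pool[k])
    thetas.append(0.0)                # warm start at the previous optimum
    for _ in range(200):              # crude coordinate-descent re-optimization
        for j in range(len(thetas)):
            grid = thetas[j] + np.linspace(-0.1, 0.1, 21)
            vals = [energy(ops, thetas[:j] + [g] + thetas[j+1:]) for g in grid]
            thetas[j] = float(grid[int(np.argmin(vals))])

e_exact = np.linalg.eigvalsh(H)[0]
print(energy(ops, thetas), e_exact)   # adaptive energy vs exact ground energy
```

The coordinate-descent optimizer stands in for whatever classical optimizer one would actually use; the structure to notice is the outer grow-measure-reoptimize cycle from the flow chart.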
Okay, so here we're comparing to this chemically inspired ansatz, which is the orange curve, which you don't see yet because everything falls on top of everything else. And then our algorithm is what we call ADAPT, for three different thresholds. As you tighten the threshold, you should get better results. Because everything here falls on top of each other, I'm going to change the scale on the y-axis and plot the error instead, on a logarithmic scale. Here, because I can solve these problems exactly on my laptop, I know the solution; that's what I'm comparing to. So here you see the orange curve is kind of what was state of the art when we wrote this paper. And you see that we can easily beat that by tightening the threshold and adding more operators to our state preparation. And as we keep doing this, of course, the results should get better, because I have all the same operators plus some. So you could ask: at what cost do you do that? I made a big fuss about not wanting overly long state preparation circuits. So this is what we're looking at here: the same x-axis, which is the inter-atomic distance, and on the y-axis, how many parameters I have in my ansatz. You can roughly think of that as how many operators you have in your ansatz. The orange curve is flat, because this is a fixed ansatz, a fixed state preparation circuit with a fixed depth, where all you're doing is changing the parameters. That's what we were comparing to. And then you see that in our algorithm, the resources you need are much reduced. So almost everywhere across these different plots, the algorithm I showed you requires fewer parameters in the ansatz and gives better energy estimates. Okay, so I think I'll skip the next slide. And then I also want to make the point that we have strong evidence that this way of doing things actually leads to better trainability.
So here what we're looking at is adding operators according to our algorithm, again with the error on the y-axis. And what changes as you move along the y-direction is where I initialize my parameters. Our algorithm tells you to initialize them at their previously optimized values: as you add operators to your circuit, keep the parameters as they are at that point, and then optimize on top of that. If you do that, you see that you're sort of insensitive to the landscape. The energy should be as negative as possible. The ideal solution is the blue curve; you want to be as close to that as possible, and the green is what comes out of our algorithm. The point is that this creates a really bad landscape that you're navigating: there exist many local minima, which is what the colors show. But by doing this clever initialization, adding the parameters one at a time, adding the operators one at a time, you sort of don't see this mess. The way we think about this is that we're not yet in the over-parameterized regime where all minima are good minima. We're in a situation where we're using this gradient criterion to dig into the parameter landscape and find, or create, a good minimum. So the last thing I want to show you is a recent work where we've generalized this algorithm: instead of adding just one gate at a time to our circuit, we add as many as fit. What does this mean? I add a gate here; let's say it touches three qubits. I can still add more gates, because my processor has more qubits. And the way I select the other gates to add: I take the gates that have the next highest gradients and that don't touch the qubits I already touched with the first one. And actually, especially for the IBM systems and superconducting qubits, idling is bad; when you don't do gates on these qubits, they behave worse.
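The layer-filling selection just described can be sketched in a few lines (my own simplified illustration of the idea, with made-up gradient scores and qubit supports): after placing the highest-gradient operator, keep adding the next-highest-gradient operators whose qubit supports are disjoint from everything already placed, so the gates can run in parallel.

```python
# Toy TETRIS-style layer filling: greedily pack non-overlapping operators,
# ordered by gradient magnitude, into one parallel layer.

def fill_layer(scored_ops):
    """scored_ops: list of (|gradient|, name, qubit_support) tuples."""
    layer, used = [], set()
    for score, name, support in sorted(scored_ops, reverse=True):
        if used.isdisjoint(support):        # fits beside the gates placed so far
            layer.append(name)
            used |= set(support)
    return layer

pool = [
    (0.90, "A", {0, 1, 2}),   # highest gradient: placed first
    (0.80, "B", {1, 3}),      # overlaps A on qubit 1: skipped
    (0.70, "C", {3, 4}),      # disjoint from A: placed
    (0.10, "D", {5}),         # disjoint from A and C: placed
]
print(fill_layer(pool))   # ['A', 'C', 'D']
```

The skipped operator B is not discarded for good; it simply waits for a later layer where its qubits are free, which is how the circuit stays shallow without dropping pool directions.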
So there's a motivation, coming from the hardware, to do gates in parallel if you can. And we call this tiling-efficient trial circuits with rotations implemented simultaneously: a very contrived way to come up with the acronym TETRIS, which, if you're of my generation, you might have spent some time playing in the 90s. All right. So we've shown that this strategy leads to much shallower circuits, as we show pictorially here. And you can get this reduction in circuit depth without paying a price of more CNOTs, more entangling gates, which are kind of your currency; these are the noisiest operations on the hardware. Okay, so I think I'll show you this slide and stop. We've done a lot of other work with these adaptive algorithms. We've worked on optimization problems and shown that you can get improvements. We've collaborated with nuclear physics people to apply it to nuclear physics problems. We're moving toward more solid-state systems and looking at spin Hamiltonians. And then we're also interested in open quantum systems, and especially Gibbs state preparation, trying to create states that are not at zero temperature in a systematic way. So with that, let me skip this control-based work and just summarize. Hopefully you got a bit of a sense of how quantum simulation is pursued, especially in the near term. We are very interested in trying to help answer the question: can we really get quantum advantage in this noisy regime? It's a very important question in the field, and we're doing what we can to address it. Our solution is these adaptive algorithms, which, at least in classical simulations, have been performing really well. And we also have a control-based approach where you go one layer down in the stack and control your electromagnetic fields directly.
All right, so with that, I'm highlighting the people from my group working on quantum simulation; there's another set of people working on the other topics. I also want to acknowledge Ed Barnes and Nick Mayhall, the two other faculty we collaborate with on these topics, and of course our funding agencies, and you, for your attention. Yeah. Okay, so the question was whether we consider open quantum systems in our simulations, say in the molecules, and whether there's a way to do that if we're not doing so. Everything we do is zero-temperature, ground state, closed quantum systems. Yes, there are ways to do the computation where you do take the environment into account; you need to use some ancillas. The closest we've done is this Gibbs state work, where we don't have an explicit quantum mechanical environment, but we represent it with some temperature. What we want to do is create a state at a temperature T, where you can vary T, and have a state preparation circuit that represents that. And then you can have problems that include interactions with an environment. I think not enough has been done in this direction; it's a very interesting direction to pursue. Yeah, I think that's an interesting thought. You could do everything with unitary gates, but you could also add sort of an analog piece to it, and I think that would be very interesting. I haven't seen anything like that that I can think of right now, but it would be an interesting direction; you could treat your system qubits in a similar way to what we do, and then build on that. Yes. Okay. So when I said all problems can be solved, I meant all quantum mechanical simulation problems. Also, a quantum computer has a classical limit, right? If I force my qubits to only be zeros and ones, they mimic classical bits.
In principle, I can then perform any simulation I could do on a classical computer, if I have a good enough quantum computer. But of course, at this point we don't see why you would want to do that: you can barely do, and can't really do yet, the things that quantum computers are supposed to be good at. You wouldn't want to, presumably. I mean, things change a lot over decades, but presumably you wouldn't want to use a quantum computer to do classical simulation. So when I said any problem, I meant any quantum simulation problem. We don't know of algorithms that give you speedups for everything; finding algorithms to do interesting things is a huge open area of research. So I'm not saying these are the only problems; these are the ones we know, right? Quite possibly there exist other problems we haven't discovered yet that can be sped up. So finding out what problems you can actually get speedups for in principle (forget the engineering and the noisy qubits, assume you have a perfect quantum computer: what problems can be sped up?) is an open question. People of course are trying to find new types of problems that you can do. But from an engineering point of view, as I said, you could use your quantum computer in a classical mode; I just don't see why you would want to. Maybe if they become extremely powerful and you don't want to network them with a classical computer, and you want to do everything within the quantum computer. It sounds like science fiction to me at this point, but people didn't know what lasers would do 50 years ago, right? The third answer is that identifying the types of problems that can be sped up, and coming up with algorithms, is really hard. Right, so the question is about TETRIS, which we think is better than ADAPT. So, changing the layers as you go, in this analogy with machine learning: I think ADAPT already does that.
So, I'm not an expert in machine learning, but the closest analogy I can think of is having a neural net where with every iteration I might add a layer, and I might also be able to grab a new function that connects the layers. I think that's an analogy to what ADAPT does. And TETRIS is adding more in the other direction, filling out each layer. So far, at least for all the chemistry simulation problems, we're convinced that TETRIS is better, for two reasons: one, you can condense the circuit a lot; it's a different circuit that comes out, and it's much more compact. And two, for certain types of qubits, like superconducting qubits, you want to do that from a hardware point of view. Not all qubits, though: trapped ions actually are not good at doing gates in parallel, so in some cases you might not want to do it. It's very implementation-dependent. And that's why I mentioned that the stack is still not really there; we cannot abstract it away, because we really need to worry about what kind of simulator or processor we're using to do all this. And I have one recent example, from only one student in my group, that I'd need to see reproduced. They found one example where TETRIS does not do as well as ADAPT. But that's very, very preliminary, and, you know, take it with a grain of salt; it's actually a result from yesterday, we're rushing to submit something. That's for the Gibbs state problem. So I don't know if there's something different about Gibbs, or if it's just an optimization issue, the optimization not converging, or the student doing something wrong. It's a very new result that might not hold, so for now I would say, for everything we've done thoroughly, that TETRIS is better. Yes. Question: in the ADAPT-VQE algorithm that you outlined, the process for selecting operators seems greedy.
Is it guaranteed that the operator you select will still be important when you select more? Yeah, that's a very good question, and you're right. It's greedy in the sense that you're optimizing locally. And no, it's not at all guaranteed; most likely it won't be the globally optimal answer. And as far as we can tell, to guarantee that you'd need some kind of exponentially scaling way of doing it. The brute-force way is to just test every possible combination, combinatorially many options, and then decide. But of course you don't want to do that, because it doesn't scale well at all. One more question in the chat, on the VQE front: what do you think a successful proof of quantum advantage experiment looks like, and what characteristics would it need to have? That's also something that's not easy to answer. To show quantum advantage you need to solve something you cannot solve classically; but if you cannot solve it classically, how do you verify it? Right. So I would say there are a few different things you can do. For some problems there's a natural way to grow them: you might have these chains of hydrogens, right? You can solve the four, you can solve the six, presumably the eight, and so on. And if you keep increasing the size and everything checks out against your classical computer, then you have reasons to trust, hopefully, that as you add extra qubits and grow the problem, it will still work. That's of course not proof at all. Another thing you would hopefully do is run the same problem on two very different quantum processors, ideally even different types of qubits, and get the same answer.
Another thing you could do is probe some properties of the state you're creating and compare them to experiment. So there are, I would say, a lot of proxies you can use to convince yourself; you cannot prove it in the strictest sense. And I think there's another question: by optimization, do you mean quadratic unconstrained binary optimization, QUBO? Yes, that's what we use. You can map these types of problems onto diagonal Hamiltonians, sometimes called Ising Hamiltonians, and then you can essentially do quantum simulation the same way I described. The only difference is that your Hamiltonian is now fully diagonal, which is a bit of a detail, but you can treat it the same. So if you're interested, as I said, in exploring these topics, we have the capacity for you to do so, and a lot of experts in the mix. So I hope to hear from some of you. Let's take a moment again to thank Sophia for joining us today. Thank you very much, that was a wonderful presentation.
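For completeness, the QUBO-to-Ising mapping mentioned in that answer can be written out explicitly (my own illustration with a made-up 2x2 cost matrix): substituting x_i = (1 - s_i)/2 turns a binary quadratic cost into a fully diagonal spin Hamiltonian, so the optimization becomes a ground-state search of an Ising-type Hamiltonian.

```python
import numpy as np
from itertools import product

# Convert a QUBO cost x^T Q x (x in {0,1}^n) into Ising form
# sum_i h_i s_i + sum_{i<j} J_ij s_i s_j + offset (s in {-1,+1}^n)
# and verify the two costs agree on every bitstring.

Q = np.array([[1.0, -2.0],
              [0.0,  3.0]])            # upper-triangular QUBO matrix

def qubo_cost(x):
    return x @ Q @ x

def to_ising(Q):
    """Return (h, J, offset) with cost = h.s + s.J.s + offset."""
    n = len(Q)
    S = (Q + Q.T) / 2                  # symmetrize; x^T Q x = x^T S x
    h = np.zeros(n); J = np.zeros((n, n)); offset = 0.0
    for i in range(n):
        offset += S[i, i] / 2          # from S_ii * x_i with x_i = (1 - s_i)/2
        h[i] -= S[i, i] / 2
        for j in range(i + 1, n):      # from 2*S_ij * x_i * x_j
            offset += S[i, j] / 2
            h[i] -= S[i, j] / 2
            h[j] -= S[i, j] / 2
            J[i, j] += S[i, j] / 2
    return h, J, offset

h, J, offset = to_ising(Q)
for bits in product([0, 1], repeat=2):
    x = np.array(bits, float)
    s = 1 - 2 * x                      # x=0 -> s=+1, x=1 -> s=-1
    assert abs(qubo_cost(x) - (h @ s + s @ J @ s + offset)) < 1e-12
print("QUBO and Ising costs agree on all bitstrings")
```

Since the resulting Hamiltonian is diagonal in the computational basis, the same variational machinery from the talk applies, as the speaker notes, with only that detail changed.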