quantum computing talk by Andreas, who gave a talk here almost exactly five years ago, give or take a few hours. He spoke at 31C3 about quantum computing, in a talk titled "Let's Build a Quantum Computer". And I think back then we had basically just found out that Google was planning to partner with the University of California, Santa Barbara to try to build a quantum computer. Of course, now, five years later, we've had a lot of developments in the field, and some big announcements by Google and other groups. And Andreas has now come back to give us an update. So please welcome him to the stage. Okay. Hi, everyone. So I'm very happy to be here again, five years after giving the first version of this talk. My motivation for giving this talk is quite simple. I did my PhD on experimental quantum computing from 2009 to 2012, and I left the field afterwards to work in industry. But people would always come to me and ask: hey, Andreas, did you see this new experiment? Did you see you can use quantum computers on Amazon's cloud now? Did you see Google has this new quantum thing? Is this really working? Can we use quantum computers yet? Why are you not working on this? And I couldn't really answer those questions. So that's why I said, okay, I want to go back to this and find out what happened in the five years since I finished my PhD, what kind of progress was made in the field, and whether we actually have working quantum computers today, or whether we are not quite there yet. So we want to do it like this. First I will give you a short introduction to quantum computing, just so that we have a common understanding of how it works and why it's interesting. Then I will show you a small example of experimental quantum speedup, namely the work I did with my colleagues in Saclay during my PhD thesis.
Then we will discuss some of the challenges and problems, why we were not able to build a real quantum computer back then, and some approaches that have come up since that would eventually allow us to do that. Then we will of course discuss Google's recent experiment in collaboration with the University of California, Santa Barbara, where they demonstrated a very impressive quantum computing system with 53 qubits. We will try to understand exactly what they did there and see whether that is already a quantum computer in the real sense, or whether something is still missing. And at the end I will try to give you a small outlook on what we can expect in the coming years. Now, in order to talk about quantum computing, we first need to talk a little bit about classical computing. You might know that classical computers work with bits, zeros and ones, which they store in so-called registers; this here, for example, is a bit register. The bits themselves are not very interesting, of course; we have to do stuff with them, so we compute functions over those bit registers. That's what a modern CPU is doing, in a simplified way: we take some input bit register values, we compute some function over them, and we get an output value. A very simple example would be a search problem. I will discuss this because later we will also see in the experiment how we can use a quantum computer to solve it, so I just want to motivate why this kind of problem can be interesting. It's a very silly search function: it takes two bits as inputs, and it returns one bit as output, indicating whether the input bits are the solution to our search problem or not. And you could imagine that we have a very, very complicated function here, for example a function that calculates the answer to life, the universe, and everything.
Well, not the complete answer, but only the first two bits. So it's really complicated to implement and very costly to execute; we might think that it takes millions of years to run this function once on our inputs. So we want to find the right solution to that function with as few function calls as possible, of course. Overall, there are four possibilities, so four input states, 00, 01, 10, and 11, that we can apply our function to. And only for one of these states, the 01 state, does the function return a one, because the answer is 42, which in binary is 0 times 1, plus 1 times 2, plus some other stuff, so the first two bits are zero and one. For all of the other values, the function returns a zero. Now, let's think about how we can implement a simple search. If we don't know anything about the function, and we can imagine it's so complicated that we can't do any optimizations, then we don't know where to look, so we really have to try each of these values in sequence. And for this, we can use a simple algorithm. We start by initializing our bit register with the value 00. Then we call the function on that register and look at the result. In this case, the result would be zero. If the result were one, we would know we have found our solution and could stop the algorithm. But in this case, the result is zero, so we go back to the first step, increase the register value to 01, and try again. In the worst case, depending on whether you're optimistic or not, we have to do this three or four times; if we want to be really sure that we find the right answer, we have to do it four times. And this is, so to say, the time complexity or the computational complexity of the search.
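The brute-force search described here can be sketched in a few lines. This is my own illustration, not code from the talk; the function f is a stand-in for the "very expensive" oracle, hard-wired so that 01 is the solution, as in the example above.

```python
def f(b0, b1):
    """Toy oracle: returns 1 only if (b0, b1) is the solution, else 0.
    In the talk's story, this stands in for a hugely expensive function."""
    return 1 if (b0, b1) == (0, 1) else 0

def classical_search():
    """Try every 2-bit input in sequence, counting oracle calls."""
    calls = 0
    for b0, b1 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        calls += 1
        if f(b0, b1) == 1:
            return (b0, b1), calls
    return None, calls

solution, calls = classical_search()
print(solution, calls)  # here the solution is found on the 2nd call;
                        # in the worst case all 4 inputs must be tried
```

For N possible inputs, the worst case is N oracle calls, which is the linear complexity discussed next.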
If you imagine that in our algorithm the most expensive operation is really calling this function f, then the cost of calling this function is what dominates the complexity of our algorithm. And in this case, the complexity is very simple, because it's linear in the size of the search space: if you have n states, for example the four different input states in our example, you also need to evaluate the function up to n times. Please keep this graph in mind, because we're going to revisit it later to see if we can do better with a different paradigm of computing. Classically, this is really the best we can do for this search problem, because we don't know anything else about the function that would allow us to optimize further. But now the interesting thing is that we might imagine not using classical computing to solve our problem. In fact, the discipline that we call quantum computing was inspired by a lecture, or a seminar, by Richard Feynman, who thought about whether it would be possible to simulate quantum systems on a classical computer, a Turing machine, if you want. And he found that quantum mechanics is so complicated for classical computers that it's not possible to do that efficiently, but that if you used the laws of quantum mechanics themselves to build a computer, a quantum computer, then it would be possible to simulate these quantum systems. And this sparked the whole idea of using quantum mechanics to do computation. In the following years, solutions were found not only for simulating quantum systems with such a quantum computer, but also for solving other problems not related to quantum mechanics, like search problems or factorization problems, for example.
Quantum computers can do computation faster than classical computers because they work differently in several ways. One of the key differences is superposition, which means that on a quantum computer we cannot only load a single value into our register, for example the first value there with only zeros, but we can load all of the possible state values at once, in parallel. This is the so-called quantum superposition state, where each of these values has an amplitude, shown on the left, which is basically a complex number that relates it to the other states. And if you have n qubits, then the total number of basis states is 2 to the power of n. So you can imagine that if you have a large quantum bit register, the number of quantum states can be really, really large, and this can be very powerful for computation. In the rest of the talk, we're going to indicate this by showing the register as a small rectangle, to indicate that there is not only a single value in there, but a superposition of all the possible input values to our function, for example. And there's a so-called normalization condition that puts a constraint on these amplitudes: the sum of the squares of their absolute values needs to be one, which basically means that the probabilities of all of these states together need to add up to 100%. And this is the first ingredient that makes quantum computers interesting for computation, because any classical function that we can run on a classical computer, we can also implement on a quantum computer. The difference is that we cannot only run it for one value at a time, but on a superposition of all possible input values.
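As a small sketch of the bookkeeping involved (my own illustration, not from the talk): a 2-qubit register is described by 2 to the power of 2, so 4, complex amplitudes, and the normalization condition says their squared magnitudes sum to one.

```python
import math

n_qubits = 2
dim = 2 ** n_qubits   # number of basis states 00, 01, 10, 11 grows as 2**n

# equal superposition: every amplitude is 1/sqrt(4) = 0.5
state = [1 / math.sqrt(dim)] * dim

# |a_i|^2 is the probability of finding basis state i on measurement
probabilities = [abs(a) ** 2 for a in state]

# normalization condition: the probabilities sum to 1 (i.e. 100%)
assert abs(sum(probabilities) - 1.0) < 1e-9

print(probabilities)  # [0.25, 0.25, 0.25, 0.25]
```

Already for 10 qubits the list would have 1,024 entries, which is the exponential growth the talk comes back to later.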
So if you want, you have this massive parallelization, where you run your computation on all possible inputs at once and calculate all of the possible output values. That sounds very cool and very useful, of course. There's a catch that we will discuss in a moment, so it's not as easy as that, but this is one part of the power that makes quantum computing interesting. The next difference is that on a quantum computer we can run not only classical functions but also so-called quantum gates. Quantum gates differ from classical functions because, unlike classical operations like AND or OR, which act on two bits in a predictable way, they can act on the whole qubit state at once and also create so-called entangled states. These are really weird quantum states where we can't separate the state of one qubit from the state of the other qubits: if we try to make a small change to one of the qubits in our system, we also change the other qubits. So we can never separate the qubits out like we can with a classical computer. And this is another resource that we can use in quantum computing to solve certain problems faster than we could with a classical computer. Now the catch, as I said, is that we of course do not only want to perform a computation with our qubit register, we also want to read out the result of that computation. And if we try that, so we do a computation and we want to measure the state of our quantum register, we have a small problem. The measurement process is actually quite complicated, but in a very simplified way, you can just imagine that God is throwing dice here.
If we have a quantum state vector with these amplitudes on the left, a_1 to a_n, then he or she will pick a state randomly from the possible states, and the probability of getting a given state as the result is proportional, as I said before, to the square of the absolute value of its amplitude. That means we can perform computation on all of the possible input states of our function, but when we read out the result, we will only get one of the possible results. And at first glance this destroys the utility of quantum computing, because we can do computation on all states in parallel, but we cannot read out the result. So, not a very interesting computer, because we can't learn about the output, or at least not easily. But it turns out that there's actually a way of still using quantum computing to be faster than a classical computer. The first practical algorithm for a search problem, namely the search problem that we discussed before, was given by Lov Grover, who was a researcher at Bell Labs, and who found the Grover algorithm that's named after him. It's basically a search algorithm which, as we will see, can solve our search problem much more efficiently than any classical computer could. In my opinion, it's still one of the most beautiful quantum algorithms, because it's very simple and very powerful. And unlike for other algorithms, like Shor's factorization algorithm, there is a proof that the Grover algorithm will always be faster than any classical algorithm. So in my opinion, it's a very nice example of a quantum algorithm that is genuinely more powerful than a classical one. Let's see how it works. There are three steps in the algorithm.
First, we initialize our qubit register, our state vector, to a superposition of the four possible input values, so 00, 01, 10, and 11, all with equal amplitude in this case. Then we evaluate the function on this input state, and we use a special encoding that basically marks the solution of our problem by changing the sign of the amplitude of the corresponding state. So you can see that in the output state, the 01 state has a negative sign, which means that it's the solution of the problem we are searching for. Still, if we did the readout now, directly, we wouldn't learn anything about the solution, because as you can see, the magnitude of the amplitude is still equal for all four states. A readout now would just give one of the four possible states at random, so we wouldn't learn anything about the solution of our problem with any certainty. In order to do that, we need to apply another step, the so-called Grover or diffusion operator, which takes this phase difference, the sign difference between these individual quantum states, and applies a quantum operator that basically transfers the amplitude from all of the states that are not a solution to our problem to the state that is the solution. For this case with two qubits, so four possible values, only one such step is needed. After executing it, you can see that the amplitude of our solution state is 1, while the amplitudes of the other states are all 0. And that's great, because now we can just do a qubit measurement, and with 100% probability we will find the solution to our search problem. And this is where the magic of quantum mechanics shows, because we evaluate this function only once: remember that in the first step, we only call the function once, with all of the values in parallel.
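The three steps above can be simulated directly with a plain state vector. This is a sketch of my own (not the talk's lab code): the "oracle" is modeled as a sign flip on the solution state, and the diffusion operator as a reflection of every amplitude about the mean, which is mathematically what it does.

```python
import math

def grover(n_qubits, solution_index):
    """Simulate Grover search over 2**n_qubits states; return
    (measurement probabilities, number of iterations used)."""
    dim = 2 ** n_qubits
    # step 1: equal superposition of all inputs
    state = [1 / math.sqrt(dim)] * dim
    # optimal iteration count is roughly (pi/4) * sqrt(N)
    iterations = max(1, math.floor(math.pi / 4 * math.sqrt(dim)))
    for _ in range(iterations):
        # step 2: oracle marks the solution with a sign flip
        state[solution_index] *= -1
        # step 3: diffusion -- reflect all amplitudes about their mean
        mean = sum(state) / dim
        state = [2 * mean - a for a in state]
    return [abs(a) ** 2 for a in state], iterations

probs, iters = grover(2, 1)  # search for state 01 (index 1)
print(iters, probs)          # 1 iteration, probabilities [0.0, 1.0, 0.0, 0.0]

_, iters_10 = grover(10, 0)
print(iters_10)              # 25 iterations for 10 qubits, as mentioned later
```

For two qubits a single iteration concentrates all the probability on the solution, which matches the 100% readout described above.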
So in terms of computational complexity we are much lower than the classical algorithm, but we are still able, with 100% probability in this case, to see which state is the solution to our search problem. And that works not only for two qubits, but also for larger qubit registers. For example, with 10 qubits you would need to execute a few more of these steps two and three: instead of one iteration, you would need 25 iterations, which is still much better than the 1,024 evaluations you would need if you really looked at every possible input of the function with the classical algorithm. So the speedup is quadratic in the size of the search space. If we look at the complexity plot again, we can now compare our classical algorithm with the quantum algorithm, the Grover search. As you can see, the number of evaluations of f that we need is only the square root of n, where n is the size of the search space, which shows that the quantum computer really has a speed advantage over the classical computer. And the nice thing is, the larger the search space becomes, the more dramatic the speedup: for a search space with one million elements, we only have to evaluate the search function about 1,000 times instead of one million times. So that's quite a speedup in that sense. Now, how can we build a system that realizes this quantum algorithm? Here I show the quantum processor that I built with my colleagues at Saclay during my PhD. If you want more information about this, you should check out my last talk; I just want to go briefly over the different aspects here. We used so-called superconducting qubits, transmon qubits, to realize our quantum processor. You can see the chip here at the top; it's about one centimeter across.
You can see the two qubits in the middle. The other, snake-like structures are coplanar waveguides through which we can manipulate the qubits using microwaves; we use frequencies similar to the ones used by mobile phones to manipulate and read out our qubits. If you look in the middle, you can see the red area, which contains the qubits, and then another zoom-in, which shows the actual qubit structure. It is just two layers of aluminum placed on top of each other, which, when cooled to very low temperature, enter a so-called superconducting state, where we can use the superconducting phase difference between these two layers to realize our qubits. There's also a coupler in the middle, the green element that you see, which allows us to run quantum gates, or operations, between the two qubits. To use this in practice, we need to put it in a dilution cryostat, which is really just a very fancy refrigerator, you could say. We cool that down to about 10 millikelvin, a very low temperature just above absolute zero. You can see the sample holder on the left side with the chip mounted to it; this whole thing is put in the dilution fridge and cooled down, and then we can, as I said, manipulate it through these microwave transmission lines. And what we did is implement the Grover search for two qubits, so we ran the algorithm that I discussed before. I don't want to go too much into the details. The results are obtained by running this algorithm many times, and as you can see, we achieved not 100% success probability, but over 50% in most cases, which is, yeah, not perfect of course, but good enough to show, in our case, that a real quantum speedup was possible.
Now, if you ask why 100% probability is not possible, or why we can't build larger systems with this, or what kept us from building, for example, a 100 or 1,000 qubit quantum processor: well, there are several things. There is, of course, the fact that we make errors when we manipulate the qubits; the microwave signals are not perfect, for example, so we introduce small errors when applying two-qubit and single-qubit operations. We also need a really high degree of connectivity if we want to build a large-scale quantum computer: if every qubit is connected to every other qubit, that would mean on the order of one million connections for a 1,000 qubit processor, which is just very hard to realize on the engineering side. And our qubits themselves have errors, because the environment the qubits are in, the chip and its vicinity, introduces noise that destroys our quantum state, and that limits how many operations we can perform on a single qubit. There is a possible solution, which is the surface code architecture, introduced already in 2009 by David DiVincenzo from the Jülich Research Center. The idea here is that we do not build a quantum processor with full connectivity, so we do not connect every qubit to every other qubit. Instead, we only connect each qubit to its four neighbors via so-called tunable couplers. And this is, of course, much easier, because you don't need so many connections on the chip, but it turns out that you can still run most of the quantum algorithms that you could run with a fully connected processor. You just have to pay a penalty for the limited connectivity.
The nice thing is also that you can encode a single logical qubit, a qubit that we want to do calculations with, in, for example, five physical qubits, so all of these qubits here on the chip would together form one logical qubit, which then allows us to do error correction. If some error happens to one of the qubits, for example a relaxation or dephasing error, we can use the other qubits, prepared in exactly the same way, to correct this error and continue the calculation. And this is quite important, because in superconducting qubit systems there are always errors present, and we will probably not be able to eliminate all of them, so we need a way to correct errors while we perform the computation. Now, the Google processor follows the surface code approach. Here I show an image from the Nature article, which was released, I think, one month ago. It's a very impressive system, I find. It contains 53 superconducting qubits and 86 tunable couplers between those qubits, and they achieve a fidelity, so the success probability, if you like, of performing one- and two-qubit gates, of higher than 99 percent. This is already very good, and almost enough fidelity to realize quantum error correction as I discussed before. And with this system you can run quite complex quantum algorithms, much more complex than the ones we ran in 2012; in the paper, for example, they run sequences with 10 to 20 individual quantum operations, or quantum gates. And just to give you an impression of the cryogenic and microwave engineering: this is the dilution cryostat where the qubit chip is mounted, and you can see that it's quite a bit more complex than the system we had in 2012, so it looks much more like a professional quantum computer, I would say. And if you ask a physicist now, why would you build such a system?
The answer would be, of course: well, it's awesome, so why not? But it turns out that if an organization like Google spends 100 or 200 million US dollars on such research, they also want to see some results. So the team under John Martinis tried to use this quantum processor for something that shows how powerful it is, that it can, so to say, outperform a classical computer. This sounds easy, but it's actually not so easy to find a problem that is doable on this quantum computer, with its bit more than 50 qubits and about 80 couplers, but is not possible to simulate on a classical computer. You could think, for example, about factoring numbers into prime components, which is of course always the motivation of certain agencies to push for quantum computing, because that would allow them to read everyone's email. But unfortunately, both the number of qubits and the number of operations that this would require are much too high to realize on this processor. The next thing that would be very interesting is the simulation of quantum systems. If you have molecules or other quantum systems with many degrees of freedom, they are very difficult to simulate on classical computers, while on a quantum computer you could do it efficiently. But again, since the Google team did not do this, I assume they did not have a feasible problem where they could actually perform such a simulation, one that would not also be calculable on a classical computer. In the near future, though, this might actually be a very relevant application of such a processor.
The last possibility would be to run, for example, the search algorithm that we discussed before. But again, for the number of qubits in the system and the size of the corresponding search space, this is still not possible, because the algorithm requires too many steps, and the limited coherence times of the qubits in this processor make it impossible to run this kind of algorithm there, at least to my knowledge. So what they did instead was a different kind of experiment, one that was doable with the processor: so-called random circuit sampling. In this case, instead of running an algorithm that does something actually useful, like a search algorithm, you run just a random sequence of gates. You have, for example, your 53 qubits, and you first run some single-qubit gates, so you change the qubit values individually, then you run two-qubit gates between pairs of qubits to create a superposition, an entangled state, and in the end you just read out the resulting qubit state from your register. These are also very complex operations, so you really need a very high degree of control of your quantum processor, which the Martinis Google team was able to achieve here. It's just not solving a practical problem yet, so to say. On the other hand, it's an algorithm that can be run on the quantum computer easily, but which, as we will see, is very difficult to simulate or reproduce on a classical system. And the reason it's so difficult is that if you want to simulate the action of these quantum gates on a classical machine, a classical computer, then for every qubit that we add, the size of the state space we have to track roughly doubles. So you can imagine that if you have two qubits, it's very easy to simulate; you can do it on your iPhone or your computer, for example.
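This exponential growth is easy to make concrete. A small back-of-the-envelope sketch (my own, under the assumption of a full state-vector simulation storing one double-precision complex amplitude, 16 bytes, per basis state):

```python
def simulation_bytes(n_qubits):
    # 2**n complex amplitudes, 16 bytes (two 64-bit floats) each
    return (2 ** n_qubits) * 16

for n in (2, 20, 30, 53):
    print(n, "qubits:", simulation_bytes(n), "bytes")

# 2 qubits: 64 bytes, trivial on any device
# 30 qubits: ~16 GiB, already a large workstation
# 53 qubits: ~128 PiB, far beyond the memory of any existing supercomputer
```

Real simulations use cleverer methods than a raw state vector, but the comparison gives a feel for why 50-odd qubits is where classical verification breaks down.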
If you add more and more qubits, though, the problem size becomes really big really fast. With 20 or 30 qubits, for example, you cannot do it on a personal computer anymore; you need a supercomputer. And if you keep increasing the number of qubits, then at some point, in this case around 50 or 53 qubits, it becomes impossible even for the fastest supercomputers we have right now. And that's the so-called quantum supremacy regime for these randomized gate sequences, which is basically just the area of the curve that is still doable for the quantum processor that Google realized, but is not simulatable or verifiable by any classical computer, even a supercomputer, in a reasonable amount of time. If we can run something in this regime, it proves that we have a quantum system that can do computation which is not classically reproducible. It's something that really can only be done on a quantum computer, and that's why running this kind of experiment is interesting: it shows us that quantum computers can do things that classical computers cannot, even if for the moment those things are not really useful. The gate sequence that they run looks something like this. We see here four of the qubits that the Google team has, and they run sequences of operations of different lengths, then perform a measurement and sample the output. So what they get as a result is a set of long bit strings, zeros and ones, for each experiment they run, and to check that the quantum computer is actually doing the right thing, you have to compare these to the results of a classical simulation of the algorithm.
And that's of course a problem now, because we just said that they realized a quantum processor which is able to do this computation on 53 qubits, and that no classical computer can verify it. So the question is: how can they prove, or show, that what the quantum computer calculates is actually the correct answer, and that it does not just produce some garbage values? That's a very interesting question, actually, and the way they did it here is by extrapolation. Instead of solving the full circuit, which contains all of the connections and all of the gates of the full algorithm, they created simplified circuits in two different ways. In one case, they cut some of the connections between the qubits in the algorithm, so that the problem space became a bit smaller; in the other case, with the so-called elided circuits, they changed the operations to allow for some shortcuts in the classical simulation of the algorithm. In both cases they were able to verify the result of the quantum computation against a classical simulation performed on a supercomputer. Then they basically did this for a larger and larger number of qubits, plotted the resulting curve, and extrapolated it into the supremacy regime, so that, based on the error models they developed from the simulations, they can say, with a certain confidence of course, that the quantum computer is probably doing the right thing even in the supremacy regime, even though they cannot verify it with their classical simulations. And in case the quantum computer did get it wrong: they have also archived the results, so in maybe 10 years, when we have better supercomputers, we might be able to go back and verify them against this 53 qubit processor, by which time, of course, they might already have a larger quantum processor again. The key results of this, I would say, are that for the first
time, they show that a quantum computer really can beat a classical computer, even if at a very artificial and probably not very useful problem. What the experiment also shows is a really astounding level of control of such a large, or medium-sized, quantum processor. Even five or six years ago, in 2012 and 2013, the systems we worked with mostly consisted of three or four qubits, and we could barely fabricate the chips and manipulate them well enough to get algorithms running. So when I see a 53 qubit processor with such a high degree of control and fidelity, I can really say that amazing progress has been achieved in the last five years, especially by the Google Martinis team. I also think it's a very good milestone on the way to a fully working quantum computer, because it nicely shows the limitations of the current system and gives a good direction for new areas of research, for example in error correction, where we can improve the different aspects of the quantum processor. The research has also been criticized from various sides, so I just want to reiterate a few of the points here. One of the criticisms is, of course, that it doesn't do anything useful, that there's really no applicability of this experiment. And while that's true, it's of course very difficult to go from a basic, very simple quantum processor with two qubits to a system that can really factorize numbers or do anything useful. So we will always need to find problems that are hard enough to prove the progress we are making on the road to quantum computing, but that we can still solve in a reasonable time frame, a couple of years for example. So in this sense, while quantum supremacy does not show anything useful in terms of the computation that is done, I think it's still a very good problem as a benchmark for any kind of quantum processor, because it requires that you have very good control over your system and that
you can run such a large number of gates at a very high fidelity, which really is, I would say, the current state of the art.

The authors also took some shortcuts. For example, they used two-qubit quantum gates which are not what we call canonical gates. That might be problematic, because if you want to run a quantum algorithm on the system, you need to implement the specific quantum gates that algorithm requires, and since they only have these non-canonical gates here, which are still universal by the way, they could not do that directly. With some modification of the system, though, it should also be possible. The last criticism might be that here you have a problem that was engineered to match a solution, and that is true in a sense: as I said, we need some problem that we can realistically solve on such a system. Still, as with the other points, if you want to build a large-scale quantum processor you need to define reasonable milestones, and having a benchmark that other groups can also run their processors against is a very good thing, because it makes the progress visible and makes it easy to compare where different groups, companies, or organizations stand in terms of the number of qubits and the control they have over them.

If you wanted to make a kind of Moore's law for quantum computing, there would be several possibilities. Here I show you, for example, the number of qubits that have been realized in superconducting systems over the years. This is of course incomplete, because the number of qubits alone doesn't tell you much about your system. We could build a qubit chip with 1,000 or 10,000 qubits today, but if you don't have the connectivity and the controllability of individual qubits, then this chip wouldn't be any good. So there are other things that we also need to take into account here, such as the coupling between
individual qubits, the coherence time, and the fidelity of the qubit operations. So this is really just one very small aspect of the whole problem space, but I think it shows nicely that in the last years there was tremendous progress in the power of these superconducting systems. The original qubit, developed at NEC in Japan by Professor Nakamura around 2000, had very bad coherence times and very bad properties, but it still showed for the first time that you could coherently control such a system. It didn't take long for other groups, for example the Quantronics group in Saclay, to pick up on this work and keep improving it. After a few years we already had qubits with a few hundred nanoseconds or even a microsecond of coherence time, which was like three orders of magnitude better than what we had before. Other advances were then made by groups in the US, for example the Schoelkopf lab at Yale, which developed new qubit architectures that allowed us to couple qubits more efficiently with each other and again have better control over manipulating them. And then there are groups like the research group at IBM, or companies like Rigetti, that took these ideas and added engineering and their own research on top in order to make the systems even better. So in 2018 we already had systems with 17 or 18 qubits in them, and now, with this Google and UC Santa Barbara work, we have the first systems with more than 50 qubits, after not even 20 years, which I think is quite some progress in this area. Of course, if you ask me how close we are to an actually working quantum computer, I find it still very difficult to say, because the group proved quantum supremacy for this randomized algorithm, but in order to do something applicable or useful with such a quantum system, I think we need at least another 50 to 100 additional qubits and a larger number of qubit operations.
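As a back-of-the-envelope illustration of why roughly 50 qubits marks the edge of brute-force classical simulation (the sketch and its byte counts are my own arithmetic, not from the talk): a full state vector of n qubits holds 2^n complex amplitudes, so the memory needed doubles with every added qubit.

```python
# Back-of-the-envelope: memory needed to store a full state vector
# of n qubits on a classical machine. Each of the 2**n complex
# amplitudes takes 16 bytes (two 64-bit floats).

def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

# Every added qubit doubles the memory requirement.
assert statevector_bytes(31) == 2 * statevector_bytes(30)

for n in (30, 40, 53):
    gib = statevector_bytes(n) / 2 ** 30
    print(f"{n} qubits -> {gib:,.0f} GiB")
```

At 53 qubits the naive state vector already needs on the order of a hundred petabytes, which is why the supremacy experiment sits just beyond what classical machines can comfortably store in memory.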
But it's really hard to say, and that's why I also say: don't believe too much in this chart. There is of course also a lot of work in the theory of quantum algorithms, because up to now we are still discovering new approaches, to quantum simulation for example, and right now a lot of research groups are looking for ways to make these medium-scale quantum computers, quantum computers with 50 or 100 qubits, already useful for quantum simulations. So it's really an interplay between what the theory can give us in terms of quantum algorithms and what we can build experimentally as a quantum processor. In my opinion, quantum simulation will definitely be where we see the first applications, in the next three to five years I would say. Other things like optimization I have to admit I'm less of an expert in; I think they're a bit more complex, so we will probably see the first applications in those areas a bit later. And the big motivation for the three-letter agencies is of course factoring, the breaking of cryptosystems, which is the most challenging application, because in order to do that you would need both very large numbers of qubits, at least 8,000 qubits for an 8,000-bit RSA key for example, and a very large number of qubit operations, because you need to run Shor's algorithm, and that involves a lot of steps for the quantum processor. So this is, from my perspective, the most unrealistic application of superconducting quantum processors in the coming years. But if somebody did build such a quantum computer, maybe we would also just not know about it, so who knows.

To summarize: quantum computers, or quantum processors, are getting seriously complex and very impressive, and we have seen tremendous progress in the last five years. I still think that we are at least five years away from building really practical quantum computers, and there are still some
challenges, for example in error correction, in quantum gate fidelity, and in the general architecture of these systems, that we need to overcome. There might also be some challenges which we haven't even identified yet and which we might only encounter at a later stage, when trying to build really large-scale quantum processors. As a last point, I just want to stress again that quantum computing research is not only done by Google or by IBM. There are a lot of groups around the world involved in this kind of research, both in theory and in experiment, and as I said before, a lot of the breakthroughs that we use today for building quantum processors were made in very different places: Japan, Europe, the USA. So it's really a global effort, I would say. And when you see the marketing or PR that companies like Google and IBM do, maybe don't believe all of the hype they're creating, and keep a down-to-earth view, so to say, of the limits and the potential of quantum computing. So that's it; I would be happy to take your questions now. If you have any feedback, there's also my Twitter handle and my email address. Thank you.

Thank you, Andreas. We have almost 20 minutes for Q&A. If you're leaving now, please do so very quietly, and if you can avoid it, just don't do it. Okay, Q&A, you know the game: there are eight microphones in this room, so just queue behind them and we will do our best to get everyone sorted out sequentially. We will start with a question from the internet.

Thank you. Do you have information about the energy consumption of a quantum computer relative to its calculation power?

Ah, yeah, that's an interesting point. For superconducting quantum computers there are several costs involved. I think right now the biggest cost is probably keeping the system cooled down: as I said, you need very low temperatures, 10 or 20 millikelvin, and in order to achieve that you
need a so-called dilution cryostat, and these systems consume a lot of energy and also materials like helium mixtures, which are expensive and not exactly abundant. So right now I think that would be the biggest consumption in terms of energy use. Beyond that I honestly don't have much of an idea. The manipulation of the qubit system is done via microwaves, and the power that goes into the system is very small compared to the power that you use for cooling. So I would say for the foreseeable future the power consumption should be dominated by the cooling, the setup cost, and the cost of the classical electronics that control the qubits, which can also be quite extensive for a large system. The qubit chip itself should really be negligible in terms of energy consumption.

Thank you. Microphone number one, please.

Hello, I have a question in regards to quantum simulation. I would have thought that with 53 qubits there would already be something interesting to do, since I think the limit for more or less exact quantum chemistry calculations on classical computers is around 10 to 20 particles. So is there a more complicated relation from particles to qubits that's missing here, or what's the problem?

Yeah, so in the paper I couldn't find the exact reason why they chose this problem. I think there are probably two aspects. One is that you don't have arbitrary qubit control in the system, so to say: you cannot run any Hamiltonian or quantum algorithm that you want, you are limited in terms of connectivity. So it's possible that they were not able to run any quantum simulation algorithm that was not also easy to run on a classical system. But I'm really not sure why they didn't. I think if they had had the chance to do a quantum simulation, they would probably have done that instead, because that's of course more impressive
than a randomized algorithm. So because they didn't do it, I think it was probably just too complicated or not possible to realize on the system. But again, I don't know for sure.

Okay, thank you. And, speaking as a sometimes quantum chemist: you can't directly map qubits to atoms, they're not two-level systems, and you usually also simulate the electrons and not just the atoms. But I'm not the speaker; we can discuss later. Microphone number two, please.

Hi, thanks. Can you compare this classic, or general, quantum computer to the one by D-Wave? That's one of the quantum computers offered via AWS; they have 2,000 qubits or something, they say.

Yeah, that's a really interesting question. The D-Wave system is a so-called adiabatic quantum computer, to my knowledge, and in that case the computation works a bit differently. With the gate-based quantum computer that Google produced, you have a gate sequence that you run on your input qubits, and then you get a result that you read out. With the D-Wave system, it's more that you engineer a Hamiltonian, which also consists of local interactions between different qubits, and then you slowly change this Hamiltonian in order to drive the ground state of the system towards the solution of the problem that you're looking for. So it's a different approach to quantum computation. They also claimed that they achieve quantum supremacy, I think in a different way, for an optimization problem, but to my knowledge the proof they have is probably less rigorous than what the Google group produced here. But again, I'm not an expert on adiabatic quantum computing; I'm more of a gate-based person. I think, though, that the proof Google showed here is more convincing in terms of reproducibility and really proving that you're doing something that cannot be done on a classical computer. Thank you.
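To make the annealing picture just described a bit more concrete, here is a minimal, purely classical sketch: it brute-forces the ground state of a tiny Ising energy function, the kind of landscape an annealer is engineered to relax into. The couplings, fields, and the toy "agreement" problem are my own illustration, not anything from D-Wave's actual API.

```python
from itertools import product

# Toy Ising energy: E(s) = -sum_i h[i]*s_i - sum_{i<j} J[i][j]*s_i*s_j,
# with spins s_i in {-1, +1}. An annealer physically relaxes into the
# lowest-energy configuration; here we just enumerate all of them.

def ising_energy(spins, J, h):
    n = len(spins)
    e = -sum(h[i] * spins[i] for i in range(n))
    e -= sum(J[i][j] * spins[i] * spins[j]
             for i in range(n) for j in range(i + 1, n))
    return e

# Couplings chosen so the unique ground state is the "all agree, all up"
# pattern (+1, +1, +1): ferromagnetic couplings plus a small bias field.
J = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
h = [0.5, 0, 0]

ground = min(product((-1, 1), repeat=3), key=lambda s: ising_energy(s, J, h))
print(ground)  # -> (1, 1, 1)
```

A real annealer explores this energy landscape physically instead of enumerating it; the point is only that the answer to the problem is encoded as the lowest-energy spin configuration.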
Yeah, D-Wave will see that differently, I think. All right, let's go to the back, number seven, please.

Hello? Seven, just wave to me. Hello. I was reading that earlier this year IBM released the first commercial IBM Q System One, or whatever the name is, and you were mentioning before to keep our expectations down to earth. So my question is: what kind of commercial expectations is IBM actually creating?

So, I spoke to some companies here in Germany that are collaborating with IBM or D-Wave or Google, and asked what they're actually doing with the quantum computers these companies offer. I think the answer is that right now a lot of companies are investigating this as something that could potentially be very useful or very relevant in five to ten years, so they want to get some experience and they want to start collaborating. I don't know of any production use of these systems, where the quantum computer would do some calculation that would not be doable on a classical system, though I don't have a full overview. I think right now it's mostly for experimentation and for getting to know these systems, and most of the customers probably expect that in five or ten years the systems will really be powerful enough to do some useful computations as well.

Thanks. All right, the internet, please.

With a quantum computer you can calculate things in parallel, but there is this reversibility requirement. So how much faster is the quantum computer at the end of the day?

Yeah, it's true that if you want to realize a classical algorithm, you have to do it in a reversible way. But to my knowledge you can, from an efficiency perspective, implement any classical non-reversible algorithm as a reversible algorithm without loss in complexity. For reversible computation you also have universal gates, like the Toffoli (controlled-controlled-NOT) gate, that you can use to express
any logic function that you require. You might need some additional qubits compared to the number of classical bits you need for the computation, but in principle there's nothing that keeps you from implementing any classical function on a quantum computer. In terms of actual run time, of course, it depends on how fast you can run individual operations. Right now a single-qubit operation on this Google machine takes about 20 to 40 nanoseconds, I think, so in that sense quantum computers are much slower than classical computers. But the idea anyway is that you do only the really necessary computations, the ones you can't do on a classical machine, on the quantum computer, and anything else on a normal classical system. The quantum processor in this sense is only a co-processor, like a GPU, I would say.

All right, microphone number four, please.

On the slide that shows Richard Feynman, you said that quantum computers were invented to simulate quantum systems. Can you please elaborate on that? You went past it.

Yeah. I don't have the link to the lecture here, unfortunately, the link is broken, but you can find it online. It's a 1982 lecture from Feynman where he discusses how you would actually go about simulating a quantum system. As we have seen, if you want to simulate a full quantum system, you need to simulate the density matrix of the system, and that takes an exponential amount of memory and computation in terms of the number of qubits or quantum degrees of freedom that you want to simulate. With a classical or Turing machine you couldn't do that in an efficient way, because every time you add a single qubit, you basically quadruple your computational requirements. And that's really where the idea came from, I think, for Feynman: to think about a computing system that would itself use quantum mechanics in order to be able to do these kinds of simulations, because he probably saw that
for large quantum systems it would never be possible, at least with our current understanding of classical computing, to run a quantum simulation of a quantum system on a classical computer in an efficient way. Does that answer the question? Okay.

All right, microphone eight, please.

As a physicist who's now doing analog circuit design, I'm wondering why all the presentations about quantum computers always use states zero and one and not multiple states. Is that a fundamental limitation, or is that just a simplification for the sake of the presentation?

So you mean why we don't use higher qubit states, multi-valued logic, or even continuous states. In principle, the quantum bits that we're using are not really two-level systems: there are not only the levels zero and one but also levels two, three, and so on, and you could use them, of course. But the computational power of the system is given by the number of states, so the number of levels m raised to the power of the number of qubits, m to the power of n. In that sense, adding another level gains you a smaller factor than adding another qubit, so it's usually not very interesting to add more states; what you would do instead is just add more qubits to your system. For continuous-variable quantum computation, I think there are some use cases where it might outperform digital quantum computers, especially if you can engineer your system to mimic the Hamiltonian of the system that you want to simulate. In those cases I think it makes a lot of sense. For other cases, where you want to run a general quantum computation, such a digital quantum computer is probably the best solution. And just to add: you could also run a continuous simulation of a quantum system on such a gate-based quantum system with roughly the same order of complexity, I would say. Does that answer the
question?

I think, if I'm not deluding myself, I understood it as: the non-diagonal elements of the density matrix grow much faster than the number of diagonal elements.

I guess you could say it like that, yeah. I'd have to think about it. All right, number three, please.

What do you have to say about the skepticism of people like Gil Kalai, who claim that inherent noise will be a fundamental problem in scaling these quantum computers?

I mean, it's a valid concern. As of today we haven't fully demonstrated error correction even for a single qubit. There are some first experiments, for example by the Schoelkopf lab at Yale, where they showed some of the elements of error correction for a single-qubit system, but we haven't even managed today to keep a single qubit alive indefinitely. So I would say it's an open question, and it's a valid criticism. I think the next five years will show whether we are actually able to correct these quantum errors, and whether our error models themselves are correct, because they're only correct for certain kinds of errors, or whether there's anything else that keeps us from building a large-scale system. So I think it's a totally valid point.

Microphone five, please.

There has been a study on factorizing on adiabatic machines, which requires log-squared-n qubits, while Shor requires n log n. But as the adiabatic systems have much higher qubit numbers, they were able to factorize much larger numbers on these machines than on the gate-based devices, and that's something that never shows up in the discussion. Do you want to comment on that? Have you read the study? What do you think: are adiabatic machines bogus, or is that a worthwhile result?

As I said, I'm not an expert on adiabatic quantum computing. I know that there were some studies or investigations of the D-Wave system, but I haven't read this particular study about factorization. I think adiabatic quantum computing is a valid approach to quantum computing as well; I'm just not
sure if currently the results there have been shown with the same rigor, with the same rigorous proofs, as for the gate-based quantum computer. But yeah, I really would have to look at the study to see that.

Can you maybe quickly say the authors, so it's on the record? If your mic is still on, number five? Sorry, I don't... okay, no problem. But yeah, as I said, I think adiabatic quantum computing is a valid approach for doing quantum computation as well. I can search for the authors later and give them to you. Okay, that would be great, thank you.

Thank you. Microphone four, please.

What do you say about IBM's claim that Google's supremacy claim is invalid because the problem was not really hard?

Yeah, so basically IBM said that if you do some optimizations in the way you simulate the system, then you can reduce the computation time from 10,000 years to maybe a few days or so. It might be a valid claim; I don't know if it really invalidates the result, because the computational power of classical systems will also keep increasing in the coming years. You could say, okay, maybe we haven't achieved quantum supremacy with respect to 2019 hardware, but then we could just look at 2015 hardware and say we have achieved it in any case. What's most impressive about this result for me is not whether we are really in the supremacy regime or not; it's the degree of controllability of the qubit system that this group has achieved. I think that's really the important point here, regardless of whether they actually achieved supremacy, because it shows that these kinds of systems seem to be a good architectural choice for building larger-scale quantum processors, and this alone is very valuable, I
think, as a guide for the future research direction, regardless of whether they achieved it or not. But of course I can understand the criticism.

Okay, one thing: the article is called "Quantum Annealing for Prime Factorization", it appeared in Scientific Reports in December 2018, and the authors are Jiang, Britt, McCaskey, Humble, and Kais.

Okay, great, thank you, I'll have a look at that. Thanks. All right, microphone six, do you have a short question?

Yeah, hopefully. It is known that it is not very easy to understand how a large quantum superposition goes over into a macroscopic state, into a macroscopic physical description, so apparently there are a couple of things not understood. Is there anything you know about, when you go to a thousand, ten thousand, a million qubits, whether you could expect the quantum behavior to break down? Are there any fundamental arguments that this will not happen, or is this not a problem considered currently?

Okay, I'm not sure if I fully understand the question. It's mostly about whether quantum mechanics has some scale dependence, so that if you go to a certain scale, then at some point you get irreversibility or something like that, yeah? I mean, I think there are large quantum systems that occur naturally, a Bose-Einstein condensate for example, which has a lot of degrees of freedom that are not controlled, of course, but that are also quantum mechanical, and there it seems to work. So personally I would think that there's no such limit, but who knows, it's experimental physics; we will see if we reach that. But from the theory of quantum mechanics right now there's no indication that there should be such a limit, to my knowledge.

All right, so maybe we will see you again in five years. Please thank Andreas once again. Thanks.