Okay. So today I will be giving a lecture about quantumness and quantum speedup in D-Wave devices. We have had a lot of talks in the past about the functioning of quantum computers in general, but the D-Wave devices are special in the way they function, so I thought I would focus on them. First I will give a quick introduction to quantum mechanics for those of you who are non-physicists, which I think will be most of you. After that I will fix some criteria for a universal quantum computer and the circuit model, which is the model you will find nearly everywhere in the literature and in current studies. After that I will go a bit more into the functioning of the D-Wave, which is not a universal quantum computer but an adiabatic one. So I will talk about optimization problems and the simulated annealing algorithm, which most of you will know, as it is a classical algorithm that perhaps some of you have implemented. And I will go a bit more into the physical functioning of the D-Wave devices. After that I will come to the three studies I want to present: one study from 2014 about the quantumness of the D-Wave device, that is, whether coherence is still maintained in the D-Wave or not, because it has a lot of qubits, many more than universal quantum computers can handle today. Then I will go into two studies about quantum speedup: one which gives a general definition of what we can expect when we talk about quantum speedup, and a study from December 2015 in which Google scientists found some really high speedup factors, on the order of 10^8 compared to some classical algorithms. And at the end I will give the conclusion.

First of all, a small introduction to quantum mechanics. Most of you will know the image of Schrödinger's cat, of particles that are in two states at the same time, et cetera. Here I want to talk about it in more mathematical terms. Physicists normally speak of a superposition of quantum states, meaning a linear combination of two basis states. If you think back to linear algebra, which is the mathematical framework normally used for quantum mechanics, you can picture a two-dimensional vector space with an x-axis and a y-axis. The x-axis you can identify with the state vector that people call |0>, and the y-axis with the state vector |1>. The thing about quantum mechanics is that as long as there is no physical interaction with the system (in this example a two-state system, but it can also be a three-state system, or a system in a vector space of uncountable dimension), the system remains in a linear combination of these two states. That means if you apply any linear operator to the system, any function you can imagine, it acts on both states: not only on one or the other, but on both. And when you measure the system, you project the state onto one of the basis states. The probability of obtaining a given basis state is determined by the probability amplitudes, here called alpha and beta, which are complex numbers; the probability itself is given by their squared magnitude.
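To make this concrete, here is a minimal numerical sketch of my own (not from the studies; the state and amplitudes are made up for illustration):

```python
import numpy as np

# Basis states |0> and |1> as vectors of a two-dimensional complex vector space.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A superposition alpha*|0> + beta*|1>; alpha and beta are complex amplitudes.
alpha, beta = 0.6, 0.8j
psi = alpha * ket0 + beta * ket1

# Measurement probabilities are the squared magnitudes of the amplitudes.
p0 = abs(alpha) ** 2   # 0.36
p1 = abs(beta) ** 2    # 0.64
print(p0, p1, p0 + p1)  # the probabilities add up to 1 (normalization)
```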
To give this a real physical meaning, you have to impose normalization on the state vector, meaning that the squared probability amplitudes must add up to one; otherwise you would have a total likelihood of more than one over all possible outcomes, which would make no physical sense.

Now when we look at a classical computer, we can ask what exactly a classical computer is. The most basic picture one can imagine is: we have two bit values, zero and one, and we have functions we want to apply to these bits to perform calculations, and every one of these functions can be realized using logic gates. We can say, okay, we can have any logic gate, but how do we prove that we can implement any logic gate? We use universal logic gates like NAND, from which any other logic gate can be built. So to prove that a system is a classical computer, we just have to make sure the NAND gate can be implemented on every bit. Note one very classical property here: most of our logic gates are non-reversible, like the NAND gate.

Now the question is, how can we implement this on a quantum computer, meaning a two-state system of which we have N copies? The answer is quite simple: linear algebra. We have N of our two-state systems, which we call qubits, and our calculations are implemented by linear operators, which have to be unitary, because a calculation is the time evolution of a quantum system, and that time evolution can only be implemented by a unitary matrix. I can explain afterwards why that is, but it would go a bit deeper, because you would have to integrate the Schrödinger equation. So our matrices must be unitary. This allows only reversible functions, meaning you must be able to undo them, to invert the function. So we cannot implement the irreversible logic gates, only the reversible ones.

And now comes the great secret of universal quantum computing. The qubits are in a linear combination of two states, meaning the computation does not take place on one state or the other, not on one bit or another bit; it takes place on all basis states of the basis we choose. For one qubit we compute on a two-dimensional vector space. If we take two qubits, every basis state couples to every other basis state, so for two qubits we have four basis states, and for three qubits eight basis states on which we compute. So the dimensionality of our vector space does not grow linearly, as it does for normal computation; it grows exponentially: for n qubits we have 2^n basis states on which we can compute. This is exactly what gives the computational power. In terms of what is computable at all, quantum computers are in the same class as normal computers, meaning you cannot solve problems on a quantum computer that you cannot solve on a normal computer, but you can solve some problems, not all but some, much faster, which is the whole point of quantum computing.
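To make the point about the 2^n basis states and the unitarity requirement concrete, here is a small sketch of my own (plain NumPy; the Hadamard gate as an example unitary):

```python
import numpy as np

# The state space of n qubits is the tensor (Kronecker) product of n
# two-dimensional spaces, so the state vector has 2**n complex amplitudes.
n = 3
ket0 = np.array([1, 0], dtype=complex)
psi = ket0
for _ in range(n - 1):
    psi = np.kron(psi, ket0)
print(psi.shape)  # (8,) -- 2**3 basis states for three qubits

# The Hadamard gate, a unitary single-qubit operator.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Applying H to every qubit at once: the operator H (x) H (x) H.
U = H
for _ in range(n - 1):
    U = np.kron(U, H)
psi = U @ psi
print(np.round(np.abs(psi) ** 2, 3))  # acts on all 8 amplitudes at once

# Unitarity means reversibility: U^dagger @ U is the identity.
print(np.allclose(U.conj().T @ U, np.eye(2 ** n)))  # True
```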
D-Wave works a bit differently. D-Wave devices are adiabatic quantum computers, meaning they do not have logic gates as we know them; they have a Hamiltonian, implemented on spins, which you can see here, and onto this Hamiltonian we can model an optimization problem. That means the device is only able to solve optimization problems, which still have a lot of applications in finance and also in computer science, and it solves these optimization problems through quantum time evolution.

Physical systems in general, in chemistry and physics, no matter where, mostly tend toward the lowest energy state; they try to minimize their energy. This is also given by the Schrödinger equation: your system will evolve under its Hamiltonian and will try to go to the global energy minimum. So how can you use this? You take the optimization problem you want to solve and map it to a time-dependent Hamiltonian. At the beginning you take a simple problem and couple this Hamiltonian to an external energy source, which you increase with time, and at the end, due to the adiabatic theorem, the ground state of the simple problem will go over into the ground state of the difficult problem you want to solve, provided the perturbation, the change of the Hamiltonian, occurs sufficiently slowly. So at the beginning of the computation you set up the simple problem, you increase the coupling to the energy source, and hopefully the system will, at the end of the computation, be found in the ground state you are searching for; then you just have to identify this quantum ground state with the solution of your problem.

The idea of quantum annealing versus simulated annealing can be pictured like this. Say we have an energy landscape and we want to find the global minimum, but sometimes we get trapped in a local minimum. There are two ways out. In simulated annealing, due to the coupling with the external energy source, the system can sometimes cross the barrier by taking thermal energy from the external bath, and then it can reach the global energy minimum and remain there. The problem is that when the barrier is very high, the system has to take up a lot of energy, which is not very likely; it is a stochastic model. Quantum mechanics has one advantage: with a certain likelihood, a quantum system can tunnel through an energy barrier. You can also put it differently: a certain part of the probability distribution will be found on the other side of the barrier. So you have a certain probability of tunneling through the barrier into the ground state, which can give you a speedup whenever the likelihood of making the thermal jump is quite low, because the quantum system can use both the thermal jump and the tunneling effect.

You can also ask how this is realized in a physical implementation, since what I talked about so far was mostly theory. I myself prefer the theory, because it is universal; you don't have to care how it is realized in a particular system. In the D-Wave, the two states, zero and one, are realized through the direction of the current flow in a superconducting ring. Current flowing one way corresponds to spin down, the one state, and the other direction corresponds to the zero state. The couplings between the spins are realized through coils, through magnetic fields, and the external energy source is also represented by an external magnetic field. You can then model the Hamiltonian you want to solve onto the system via a Linux interface, which allows you to change the interactions between the qubits as well as their individual weights. So if you want one spin to be more important than another, you set the weight of the more important spin to h_i = 1 and the other, for example, to h_j = 0.5, and so on.
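To make that Hamiltonian concrete: in the computational basis it is an Ising-type cost function with local weights h_i and couplings J_ij. A minimal sketch of my own, with made-up coefficients:

```python
import numpy as np
from itertools import product

# Ising cost function E(s) = sum_i h_i*s_i + sum_{i<j} J_ij*s_i*s_j,
# with spins s_i in {-1, +1}. The optimization problem is encoded in h and J;
# the annealer searches for the spin configuration of minimum energy.
h = np.array([1.0, 0.5, -0.3])       # individual spin weights (illustrative)
J = {(0, 1): -1.0, (1, 2): 0.7}      # pairwise couplings (illustrative)

def energy(s):
    return h @ s + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

# Brute force over all 2**n configurations -- only feasible for tiny n,
# which is exactly why one wants an annealer for large problems.
best = min(product([-1, 1], repeat=len(h)), key=lambda s: energy(np.array(s)))
print(best, energy(np.array(best)))
```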
The qubits are arranged in a structure called the Chimera graph: you always have eight qubits in a unit cell, coupled among each other, the unit cells are coupled between each other, and the external magnetic field acts on all of them. Here are some small technical details: there is the D-Wave One with 128 qubits and the D-Wave Two with 512 qubits, which is a record, because in the building of universal quantum computers the state of the art is about 5 to 10 qubits. This number of qubits would give the machine, if it were a universal quantum computer, an enormous computational power of 2^128 or 2^512 basis states, more basis states to compute on than there are atoms in the universe. The only question is whether the quantum effects are still active. The problem is that the more qubits you couple to each other, the more likely you get decoherence effects, meaning the superposition, the linear combination, is lost: the system falls into one of the basis states and you cannot compute on that. So when we ask about the quantumness of the D-Wave devices, we are asking: are our qubits still in superposition or not? One thing you must not forget is that you have to cool the system down to nearly absolute zero. Absolute zero itself cannot be reached, due to the laws of thermodynamics, but this temperature is much colder than the universe and among the lowest temperatures we can achieve today. So it is really state of the art, but you have to cool the device to prevent it from interacting with its environment, in order to maintain the superposition.

Okay. So the first study, conducted in 2014 by Boixo et al., tried to find out whether the D-Wave machine is really quantum. The problem is that you cannot just open the computer and look inside, because you would lose the superposition; that would also be a measurement, an observation. So what can you do? You implement random spin-glass problems: you choose some Hamiltonians at random and calculate the correct ground-state solution on classical computers, so that you know when a run finds the right result. Then you run a simulation of a quantum system, called simulated quantum annealing; you simulate classical spin dynamics; and you run a classical algorithm called simulated annealing, which uses the method I described but without quantum tunneling. You check whether each simulation finds the ground state or not, calculate the success probability, do the same for the D-Wave, and then compare the probability distributions: does our machine have the same success-probability distribution as a real quantum method? Does it behave more like simulated annealing, or like classical spin dynamics? That way we can test whether the machine is quantum or not (a toy version of this procedure is sketched below).

So you see, the D-Wave shows a bimodal distribution, with a peak here and a peak here. The same is true for simulated quantum annealing and for the spin dynamics, while simulated annealing shows a unimodal probability distribution, much flatter. A bimodal success-probability distribution means you have a clear separation into hard and easy problems. Easy problems are those where the ground state is nearly always found, with a probability of nearly one, which is here, and hard problems are those where the ground state is nearly never found.
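Here is that toy version, as a sketch of my own (tiny instances, a bare-bones Metropolis annealer, nothing like the real experimental protocol): generate random spin-glass instances, estimate the success probability of each, and histogram them.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def random_instance(n):
    """Random spin glass: couplings J_ij in {-1, +1} on all pairs."""
    return {(i, j): rng.choice([-1.0, 1.0])
            for i in range(n) for j in range(i + 1, n)}

def energy(s, J):
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def ground_energy(n, J):
    return min(energy(s, J) for s in product([-1, 1], repeat=n))

def simulated_annealing(n, J, sweeps=50):
    s = rng.choice([-1, 1], size=n)
    for beta in np.linspace(0.1, 3.0, sweeps):     # inverse-temperature ramp
        for i in range(n):
            e_old = energy(s, J)
            s[i] *= -1                              # propose one spin flip
            dE = energy(s, J) - e_old
            if rng.random() > np.exp(-beta * max(0.0, dE)):
                s[i] *= -1                          # reject: flip back
    return energy(s, J)

n, n_instances, n_repeats = 8, 50, 20
success = []
for _ in range(n_instances):
    J = random_instance(n)
    e0 = ground_energy(n, J)
    hits = sum(simulated_annealing(n, J) == e0 for _ in range(n_repeats))
    success.append(hits / n_repeats)

# The study histograms these per-instance success probabilities: a bimodal
# histogram (peaks near 0 and 1) separates hard from easy instances, while
# a unimodal one does not.
print(np.histogram(success, bins=10, range=(0, 1))[0])
```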
And you see that the D-Wave splits the problems nearly one-to-one into hard and easy, meaning you have nearly as many hard as easy problems. The same is nearly true for simulated quantum annealing, where you have somewhat more easy problems than hard ones; but it is not true for the spin dynamics, where you have a lot of hard problems and only a few easy ones. So the D-Wave certainly does not behave like classical simulated annealing, is also much less likely to behave like the classical spin dynamics, and behaves very similarly to simulated quantum annealing.

But of course this alone is not an indicator, so they also looked at the correlation between the methods, and to control for calibration errors, which also occur, they ran many iterations of every problem and took the average. To have a control parameter, they correlated the D-Wave device with itself, which gives the best correlation distribution you can obtain. On the red diagonal line lie the points you would get for perfectly correlated success probabilities. It can be read like this: for example, SQA has one problem with a 0.6 probability of obtaining the correct solution, and for this problem the D-Wave has exactly the same probability here; but there is another problem, for example here, for which SQA has a bit more than 0.6 probability and which the D-Wave device can solve nearly every time. For the spin dynamics you can see a lot of problems which are hard for the classical spin dynamics but where the D-Wave still has some chance of solving them, and a set of problems which are easy for both methods, meaning the D-Wave is in every way much more powerful than the classical spin dynamics. Compared with simulated annealing, there are some problems for which one method is much better and some for which both are equally powerful, but the correlation is very weak: the only points which are even slightly correlated are these ones here. The spin dynamics are hardly correlated at all; they show a very, very weak correlation. But for simulated quantum annealing the correlation is nearly as good as for the D-Wave device with itself, meaning that the quantumness of the D-Wave device is preserved even at this very high number of qubits. So the D-Wave most likely behaves like simulated quantum annealing, meaning we have a quantum computer. The only question now is: is our quantum computer also fast? Can it really solve optimization problems better than a classical algorithm?

Before I continue: what do you think? What is your opinion on the matter? Do you think the D-Wave device has a chance of being faster or not? Would you belong to the people who invested one hundred million dollars into the start-up? Yes or no?
Okay, that's quite interesting, because the latest thing I had heard from the scene was: yeah, D-Wave, you can forget it, they told a lot of bullshit, they said "we are fast" but they didn't show any results, et cetera. Which was also the point of the 2013/2014 study on defining quantum speedup: a group of researchers, Rønnow et al., got together and asked, how exactly do we define quantum speedup, and what do we test the D-Wave computer against?

They had about five categories, which can be separated roughly as follows. On one end there is provable quantum speedup, meaning it is mathematically provable that the quantum algorithm is faster than any possible classical algorithm. At the moment there are two well-known algorithms for which this is the case. One is Shor's algorithm for prime factoring, the algorithm everybody talks about when quantum computers are said to break RSA encryption, which uses superposition and the quantum Fourier transform. The other is Grover's search algorithm, which scales with the square root of the problem size, is much faster than any classical search algorithm, and could also be used to attack symmetric encryption. But neither of the two can run on the D-Wave device.

For the D-Wave we can only talk about a so-called limited quantum speedup, meaning we have a quantum algorithm, quantum annealing or simulated quantum annealing, and we search for a classical counterpart which we run against the D-Wave device. Here you have the problem that you must test all devices experimentally, meaning you always have errors in the experiments, calibration errors, whatever, and a dependence on the experimental setup. The second problem is the choice of algorithm: of course we can claim a large quantum speedup if we test our simulated quantum annealing against a monkey doing the searching, or against a very weak classical algorithm, but in that case it is not a quantum speedup, it is just the speedup of a fast algorithm over a slow one, which is not what we want either.

So in the study they again chose random spin-glass problems and tested them on the D-Wave computer, against simulated annealing, as it is very close to the functioning of the D-Wave device, and against simulated quantum annealing; I don't think I have to say much about that one, as it mostly behaves like the D-Wave device anyway. So they chose the algorithms quite well, but still did not really find a real quantum speedup. They said: we have an indication of a limited quantum speedup in a certain range of problem sizes, but we are not very sure. They also stated some caveats. They tested the D-Wave device without any error correction, and a problem for adiabatic quantum computing is that with all the spins you have in the device you need very good error correction: adiabatic quantum computing can be universal, but only with no noise at all, and even at 0.2 kelvin you still have some noise, so you need very good error correction to get better performance. The second caveat is hidden parallelization within the simulated annealing, which you can mistake for a quantum speedup: clearly, if you distribute one problem onto two processors it will be faster than the same problem on one processor, but that would not be any quantum speedup, just parallelization. And the third caveat is calibration errors: it is an experimental setup, so you always have some errors in it.
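The operational test behind "limited quantum speedup" is whether the ratio of times to solution keeps growing with problem size, that is, whether the scaling exponents differ; constant prefactors can always be hidden by parallel hardware. A sketch of my own, on made-up timing data:

```python
import numpy as np

# Hypothetical time-to-solution data (made up for illustration): problem
# sizes N and median solve times for a classical solver and an annealer.
N = np.array([200, 300, 400, 500, 600])
t_classical = 1e-3 * np.exp(0.015 * N)   # illustrative exponential scaling
t_annealer  = 5e-3 * np.exp(0.009 * N)   # illustrative, smaller exponent

# Fit log(t) = a*N + b; the exponent a is what matters for speedup,
# not the prefactor b (which parallelization or better hardware can hide).
a_c, _ = np.polyfit(N, np.log(t_classical), 1)
a_q, _ = np.polyfit(N, np.log(t_annealer), 1)

# A speedup in the asymptotic sense exists only if the annealer's
# exponent is smaller: then the time ratio grows with N.
print(a_c, a_q, a_c > a_q)       # True -> the speedup grows with N
print(t_classical / t_annealer)  # the ratio itself, increasing in N
```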
This study is cited by, I think, nearly everybody in the scene, as well as in the sciences, when people say that D-Wave is a lot of expensive trash, that it is failing its goals, that we can forget it; and that was also the last information I had on the topic. But in December 2015 I found a paper from Google, a study claiming that D-Wave can be quite fast for some problems, and it is this study which woke my interest in the topic. Researchers around Denchev and Boixo compared the newest D-Wave device, the D-Wave 2X, again against simulated annealing, but also against another algorithm, also used a lot in finance, called quantum Monte Carlo. It is a classical algorithm, even though it has "quantum" in its name, and it is one of the best algorithms we know at this time.

They distinguished the wall-clock time, which is the time needed to program the device, run the annealing, and read out, from the pure annealing time, the time the D-Wave device takes for the calculation itself. For a theoretical speedup we are only interested in the pure annealing time: we don't care how long it takes to program the device or to read it out; we are interested in the annealing time, to show how fast the device can in principle be. Clearly, a short annealing time means a fast calculation and result: the shorter the annealing time, the faster the device, and the annealing time translates directly into the speedup, so when we talk about speedup we can just as well talk about annealing time.

The results: they found a very high speedup factor, in the range of 10^8, against simulated annealing and against quantum Monte Carlo, meaning the D-Wave device was more than 100 million times faster than these classical algorithms. They explained it with the tunneling effect: as I said, a quantum particle can tunnel through an energy barrier where simulated annealing has to jump over it. We will also see below how the annealing time is calculated. This also shows here: against simulated annealing you have an asymptotic speedup, meaning the larger your problem size, the faster the D-Wave device is compared to simulated annealing; against quantum Monte Carlo you only have a constant-factor speedup, meaning the D-Wave will always be faster than quantum Monte Carlo by the same constant factor. But in both cases the speedup is quite high.

We can estimate the quantum annealing time by looking at the number of cotunneling qubits: the annealing time is exponential in this number, with a prefactor B_QA and an exponent alpha determined by the system. For simulated annealing, the annealing time is also exponential, in the barrier energy divided by the temperature times Boltzmann's constant: the higher the barrier energy, the longer the process takes. But if the barrier is very tall and at the same time very narrow, the advantage of the tunneling effect is large: when the barrier is thin, the quantum particles have a much higher probability of tunneling through it, and when the barrier is tall, the probability of having enough thermal energy for the jump across it is quite low. In this case the classical annealing time down to the ground state takes much, much longer than the annealing time of the D-Wave device. And since the exponential function increases faster than any polynomial, the constant prefactor is, for n to infinity, not really important; only the exponential part matters, and so you get this asymptotic speedup.
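As a rough reading of that scaling argument (my own sketch with illustrative constants; the formulas in the paper are more involved):

```python
import numpy as np

# Rough scaling picture (constants are made up, not from the paper):
# quantum annealing time grows exponentially in the number n of cotunneling
# qubits, t_QA ~ B_QA * exp(alpha * n). Simulated annealing grows
# exponentially in barrier height over temperature, t_SA ~ B_SA *
# exp(dE(n) / (kB*T)), and for these crafted problems the barrier grows
# with n. Quantum Monte Carlo simulates tunneling, so it shares the
# exponent alpha but with a much larger prefactor B_QMC.
alpha = 0.4
B_QA, B_QMC, B_SA = 1e-6, 1e-4, 1e-6
kB_T = 0.5
barrier = lambda n: 0.8 * n       # assumed growth of the barrier with n

n = np.arange(4, 14)
t_QA  = B_QA  * np.exp(alpha * n)
t_QMC = B_QMC * np.exp(alpha * n)            # same slope, larger prefactor
t_SA  = B_SA  * np.exp(barrier(n) / kB_T)    # steeper slope

print(t_SA / t_QA)    # grows with n: asymptotic speedup over SA
print(t_QMC / t_QA)   # constant (= B_QMC / B_QA): constant-factor speedup
```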
This asymptotic speedup means: the more parameters you have in your problem, the faster the D-Wave device is compared to simulated annealing. For quantum Monte Carlo it looks a bit different, because quantum Monte Carlo also makes use of the tunneling effect, even though it is a classical algorithm, a classical simulation. The exponent is the same, because quantum Monte Carlo also exploits the tunneling effect, but since the D-Wave device is a physical system using real physics for its calculation, its prefactor is much smaller, meaning the quantum annealing time of the D-Wave device is still much shorter than the annealing time of quantum Monte Carlo; but the speedup is only this constant factor, the ratio of the prefactors B_QMC over B_QA. You can also see this in the graph: you have approximately the same slope for the D-Wave and quantum Monte Carlo. It is a logarithmic scale, meaning the slope is equal to the exponent, and you can see that D-Wave and quantum Monte Carlo have the same slope, as I said, but differ by this constant factor in the annealing time, meaning that even for increasing problem size the D-Wave is faster by the same amount. Compared to SA, on the other hand, the D-Wave gets faster and faster: the difference between simulated annealing and the D-Wave grows with the problem size, so the D-Wave will always be much faster than simulated annealing, and it is mostly the very large problem sizes for which you would want to use a quantum computer in the first place.

So, to my conclusion. The D-Wave is, as said, adiabatic, and it is not universal; adiabatic is equivalent to universal only without any noise, meaning that as long as we don't have good error-correcting codes we cannot implement Shor's algorithm on the D-Wave device, we can only solve optimization problems. But that is still quite useful for a very large range of problems. We have the bimodal distribution and the good correlation between the D-Wave and simulated quantum annealing, meaning that the D-Wave does use quantum effects in its calculations, behaving like simulated quantum annealing. For the speedup, we can say that the quantum tunneling effect allows a very significant speedup for combinatorial problems, by a factor of 10^8, but it is very dependent on the algorithm: against SA we have the asymptotic speedup, against quantum Monte Carlo the constant-factor speedup, and against some algorithms we have no speedup at all. There are algorithms for which a normal laptop is 15 times faster than the D-Wave, which could be a problem at this time. But we also have to say that quantum computers are not faster than normal computers in every case; they are faster for some problems, so it depends on your problem which device you want to choose. And for the algorithms that are always cited as the reason we need a quantum computer at all, the D-Wave cannot help: they cannot be implemented on it, because they would lack the error correction.

So here are my sources. The studies are cited, and they are also in the wiki with links to them, so you can read for yourselves how the scientists reached their conclusions and results. And now I'm open for any questions, if you have some.

So your question was which algorithms we can expect to run on this machine. Every optimization problem you can map to the Hamiltonian; and the Hamiltonian is equivalent to nearly every optimization problem you can imagine, so you can map every optimization problem onto it. The only question is that you have to find the mapping, a projection between the Hamiltonian and your optimization problem, saying this spin is equal to this variable, and my couplings and my weights are equal to its coefficients. So you have to figure out for yourself which problem you have and how you want to map it onto the D-Wave, onto the Hamiltonian; one standard example is sketched below.
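As an illustration of such a mapping (my own example, not from the talk's sources): MaxCut on a weighted graph maps directly to an Ising Hamiltonian with all h_i = 0 and J_ij equal to the edge weights, because cutting edge (i, j) corresponds to s_i and s_j having opposite signs.

```python
from itertools import product

# MaxCut -> Ising: split the vertices into two sets (spin +1 / -1) so that
# the total weight of edges running between the sets is maximal. With spins
# s_i in {-1, +1}, cut weight = sum_ij w_ij * (1 - s_i*s_j) / 2, so
# maximizing the cut equals minimizing E(s) = sum_ij w_ij * s_i * s_j
# (h_i = 0, J_ij = w_ij). Example graph (made up):
edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 1.0, (3, 0): 3.0, (0, 2): 1.5}
n = 4

def ising_energy(s):
    return sum(w * s[i] * s[j] for (i, j), w in edges.items())

# Tiny instance, so brute force stands in for the annealer here.
s_best = min(product([-1, 1], repeat=n), key=ising_energy)
cut = sum(w * (1 - s_best[i] * s_best[j]) / 2 for (i, j), w in edges.items())
print(s_best, cut)  # prints one optimal bipartition and its cut weight (7.0)
```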
Other questions? — Will we find a cookbook for this mapping, so that in time you don't have to spend so much effort finding it? — For that I would have to talk to the researchers; I'm just a physics student who was interested in the topic and wrote a term paper on it. Some of the researchers you can reach over the internet: on arXiv, where I found the studies, you have the email addresses. Also, one mathematician, the one who wrote the algorithm that is 15 times faster on a laptop than the D-Wave device, put his code online on GitHub; I could give you the link if you want. So I think you would have to talk to the mathematicians, but since it is proven that you can map every such problem onto this device, there should also be a cookbook for it. I would really have to ask the people, but it's an interesting question, and I'm really considering whether to do so. Of course they have email addresses, and I think you can just write that you are interested in such a thing, that you heard a talk on the studies. My interest was also to make these studies a bit more public, because on arXiv you can see all the blogs and studies that link to a paper, and these are studies which I think are quite important but which hardly anyone pays attention to; there were just a small number of blog posts linking to them. So I think you should be able to just write them a mail and get an answer. — You're welcome.

I seem to remember that there is a contract for D-Wave orders, basically stating that, I think, every two years you get twice the capacity in quantum bits. So how did they achieve this? Is this something trivial that can be scaled for, I don't know, the next 50 years, or is it something that will hit a limit rather soon, so that they will not be able to make much progress anymore?
Honestly, concerning that I have no idea. You cannot look inside the D-Wave: when you buy this computer for 10 million dollars, you are forbidden to open it and look inside; that is the second reason why the researchers conducted the study of whether it is quantum or not. D-Wave promises a lot, but as we cannot really look at the inside and find out what is in there, we cannot say whether the number of qubits is doubling every year or not, or whether that is even possible. The thing is, we have researchers all over the world working with enormous budgets on getting quantum computers to run, and those are at 10 qubits maximum. The record for Shor's algorithm, for example, was set in 2001, when 15 was factored into its prime factors 3 and 5. That is the problem: the difficulty with quantum computers in general is not that the theory is unclear; it is the experimental physics, which is just quite hard. But I can tell you that the critics of D-Wave say they use the wrong approach: the whole design has difficulties, and they should not have gone into adiabatic quantum computing but rather into universal quantum computing. And there exists a third concept for quantum computers, measurement-based quantum computation, where you use measurements to drive the computation forward. That is an approach which allows universal quantum computing and which, as far as I know, could perhaps in the future also be implemented on D-Wave devices. I don't know whether anybody has ever seriously thought about that, but it could perhaps be a way of getting this to work.

— Yes, there is the second law, and it also applies to information processing: you need a minimum amount of energy for a calculation. I think this is independent of whether you use CMOS or quantum computing or something else; there is a minimum energy used for a calculation. So will quantum computing at some point in the future be the better choice for some algorithms, or is it possible that CMOS will, some years later, do everything, so that we don't need quantum computing?

For this one I have to admit I had not thought about CMOS, which is just a normal computer, so I'm not sure. Concerning the energy, I think you are referring to the equivalence of energy, entropy, and information. — Yes, it's a signal-to-noise question: you have the thermal noise and the signal energy for the computation, and with an electronic computer you can lower the signal level and so get lower energy consumption, but there is a minimum, because there will be more errors in the computation when the signal level gets close to the noise level. — Really, I cannot answer this with certainty. From a purely intuitive point of view I would argue like this: the calculations on a universal quantum computer, not the adiabatic one but the universal one, are just different choices of basis; you always choose a different basis for a two-by-two subspace of the product space, and you project onto basis states. The choice of basis is not an inherent property of your vector space; it is something you impose yourself. So I would say that you perhaps do not lose entropy during the computation, and that the energy you spend is spent at the readout. This is just intuition, I really don't know, but I would say it does not matter how many computations you do, because of the unitary matrices you use for the computation.
On the one hand, they are just computations in time; on the other hand, they are the operators of quantum time evolution, so for the natural evolution of your quantum system you would have them in any case. And the third thing is that no matter how many of these matrices you compose, how many functions you chain together, when you have a function that maps one qubit state to another, you can always represent the whole chain as one rotation, one matrix. So I think that in the computation itself you neither lose nor gain entropy; it is only in the readout process. For measurement-based quantum computation, though, if this idea is right as I have stated it, you would constantly lose energy there. Okay, thank you. Other questions? Is it fine?
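A small numerical check of that last claim, as a sketch of my own: any chain of unitary gates composes into a single unitary, which is exactly reversible, so no information is erased until the measurement.

```python
import numpy as np

# Two single-qubit unitaries: the Hadamard gate and a phase gate.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

# Composing gates is matrix multiplication; the product is again one unitary.
U = S @ H @ S @ H
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: still unitary

# Reversibility: U^dagger recovers the input state exactly, so the
# computation itself erases no information; the entropy cost sits in readout.
psi = np.array([1, 0], dtype=complex)
print(np.allclose(U.conj().T @ (U @ psi), psi))  # True
```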