Even though it's very formal, I mean for a physicist, it's a little bit orthogonal to the way of thinking that has been given to us while we were growing up in college. But it's very useful, because more than probably even other, more concrete branches of mathematics, it allows you to really think about the logic behind the whole thing. And so why did I get interested in this theory of computation? Well, I mean, there are several reasons for that. Of course, coming from physics, the practical implications were very appealing to me. The practical implication of understanding whether something is computable or not, and essentially how difficult it is to make a computation, is that it tells you a lot about the physics of the problem. So for example, when we do Monte Carlo simulations in physics, there are some problems which are inherently easier than others. You write the same algorithm, the same Monte Carlo code. You run it on a ferromagnet, and it runs smoothly, and you get numbers out; you can do finite-size scaling, and so on and so forth. Then you take the same code, you run it to find the properties of spin glasses, and the same code stumbles. And you ask yourself: why is that? Is there a deep reason for this to be true? Is it just by chance? Can I write a better code? Can I ever hope to write a code that works for spin glasses as well as my five-line Monte Carlo code works for ferromagnets? And when you start thinking deeply about these kinds of questions, you end up thinking about computer science.
And you realize that there is a lot about computer science that is about physics, and not only mathematics. It's going on in a physical object. And the same thing if you try to, you know, understand the physical properties of a quantum system: then forget about the Monte Carlo code. Sometimes we can write a quantum Monte Carlo code. For some problems, you cannot even write a quantum Monte Carlo code. And you ask yourself, how does nature solve this, right? Is there inherently something more powerful in quantum mechanics than there is in classical computers? Can we use this? And the other motivation that led me to work on these things is that sometimes, when you try to understand what you can do with physical systems, this brings importance to the system itself. For example, topological quantum memories brought a lot of attention to topology, to the implications of topology for condensed matter. And this is an interesting topic. I mean, the Nobel Prize was given two years ago for this. So, thinking deeply about these questions: these are questions about nature. They're not questions about how our brain works, although that's also a question about nature. These are really questions about things which happen in real objects. So it's as much a question about physics as it is about math. OK, so much for an introduction; I think I've spent enough time on it. So first of all, what does it mean? We're not going to go through all of this. These are my notes for the class about quantum computation that I was teaching at SISSA until last year, actually. So I'm only taking some of it. If you are interested, I can give you notes to read and things to teach. And since the audience is very broad, although I targeted this a little bit towards mathematicians, if you have questions in general about what I'm saying, you can just ask. So the first thing I want to do is to give you a precise idea of what it means to do a computation.
And the simplest computation is that which answers either yes or no. OK, so you're given a problem, and you ask whether this problem has a yes or no answer. There are computational models. Sometimes they are simple, and they become more and more complex, until we reach what we think is the most universal computational model that we have, which is the Turing machine. But the simplest computational model that you should have in mind is, for example, what is called the finite automaton. So in the finite automaton we have a set of states. We have an alphabet; usually this is just the Boolean alphabet. And we have a transition function: a function which takes a letter in the alphabet and the state the automaton is in, and makes the automaton go to another state. The idea is that there is a register. I read the register, 0 or 1, and depending on whether I read 0 or 1, and depending on the state the automaton is in, I do something. And then there is a special state, an initial state q0. And there is a subset of states, but you can even think of just one of them, which are called accepted states. So if at the end of the computation my automaton is in one of the accepted states, my string is accepted: it is said to be part of the language of the automaton. All right. So let me give you a quick example. For example, the machine M2 is made of two states, the Boolean alphabet, the transition function that I will now specify, the initial state q0 equal to q1, and the set of final states, which is just {q2}. Now the transition function delta, depending on what it reads and on the state you are in, goes to, all right. So now you have the machine, and I have to run it on a string. So, for example, let's run it on the string x, which is 1, 1, 0, 1. So what do I do? This is the idea. I read 1, and it depends on where I was, right? So I read 1 and I go to q2. I read 1 and I go to q2. I read 0 and I go to q1.
I read 1 and I go to q2. The computation ends and I finish in q2. q2 is an accepted state, so my string is accepted. So x is in L(M2). This is the idea of what a computation is for an automaton. All right. Yes. Yes. The initial state in this case is not important, because the machine always does the same thing; in general, it is. So what does this machine do? Well, simply, I mean, look at it, it's simple: if the last digit is 1, the string is going to be accepted; otherwise, no. So the language L(M2) is the language of all the bit strings which end with 1. This is a particularly stupid example, so we can make more complicated examples. So a language is a set of bit strings: a subset of the set of all strings, if you want. And a language is called regular if there is a finite automaton which decides it. So the language of bit strings which end with 1 is a regular language. This is a useful definition because it does not encompass all of the languages, which means that not all the things that we think we can do with algorithms are just finite automata. OK. So actually, there is something interesting here. This was just to give you an idea of what a computation is, what doing a computation means. There are more complicated models of computation. One of these is particularly important: it's called a push-down automaton, PDA for short (the first ones are called FA, finite automata). A PDA is exactly the same thing, except that in addition it has an extra memory on which it can push bits and extract just the last bit. Push and extract, push and extract, push and extract. So which languages can be decided by push-down automata? Clearly all the regular languages, plus more. And these more are called context-free languages. And these are important. They were invented by Noam Chomsky. And they are important because human language is a context-free language. So studying context-free languages allows you to parse.
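As a sanity check, the machine M2 described above can be simulated in a few lines. This is my own sketch, with the states named q1 and q2 and the transition function read off from the lecture (on 1 go to q2, on 0 go to q1):

```python
# Simulate the finite automaton M2: states {q1, q2}, Boolean alphabet,
# initial state q0 = q1, accepted states {q2}. It accepts exactly the
# bit strings that end in 1.
def run_m2(bits):
    delta = {("q1", "0"): "q1", ("q1", "1"): "q2",
             ("q2", "0"): "q1", ("q2", "1"): "q2"}
    state = "q1"                   # the initial state
    for b in bits:
        state = delta[(state, b)]  # one transition per input letter
    return state == "q2"           # accept iff we halt in an accepted state

print(run_m2("1101"))  # True: 1101 ends in 1, so it is in L(M2)
print(run_m2("1100"))  # False
```

Running it on the string 1101 from the example reproduces the accepting run q1, q2, q2, q1, q2.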
I mean, when you have a sentence and you want to parse it: say, OK, this is the verb, this is the subject, this is the object; and the object can itself be a sentence. "I like doing this": the object is itself a sentence. So PDAs decide context-free languages. And these are very, very important in linguistics. And it's the reason why Noam Chomsky is, I think, the second most cited author in the world, because this generated a lot of things. He's not only a political activist; he was actually a very good linguist, from studying these things. OK. So, by the way, finite automata: when you go to the supermarket and there are sliding doors, what's deciding whether it has to open the door or not? It's not just a simple switch; there is a little circuit. Because you want it to open the door, but if you are on the other side, it has to stay closed. That can be done with a finite automaton: you don't need the full power of a Turing machine to decide whether to open the door, whether you are entering or exiting. OK. So, no, the set of states will always be finite. The alphabet will always be finite, and in fact the most general alphabet you can choose is the Boolean alphabet: you can reduce everything to it, which is to say, if you have three letters, you cannot decide more languages than if you have two. It's finite in the sense that it doesn't have a memory. The Turing machine has something which is infinite, and now I'll tell you: the memory of the Turing machine is infinite. OK. With the PDA, there is an extra set here, which is the stack, and the transition function has an extra slot that says: if you read this, and you are in this state, then write this on the stack. That's predetermined. Everything is predetermined. This is an algorithm. OK? You give the algorithm before giving the input. That's the idea. You specify the algorithm, you put in the input, and you see how the algorithm runs on the input.
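A hypothetical illustration of the push-and-extract memory: balanced parentheses form a classic context-free language that no finite automaton decides, and the stack is exactly what the PDA adds. This sketch of mine shows only the stack discipline, not a full PDA with a state set:

```python
# Decide the context-free language of balanced parentheses using a stack:
# push on "(", extract the last pushed symbol on ")". A finite automaton
# cannot do this, because it would need unbounded memory to count depth.
def balanced(s):
    stack = []
    for ch in s:
        if ch == "(":
            stack.append(ch)   # push
        elif ch == ")":
            if not stack:      # nothing to match: reject
                return False
            stack.pop()        # extract the last bit pushed
    return not stack           # accept iff everything was matched

print(balanced("(()())"))  # True
print(balanced("(()"))     # False
```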
Just like what I said before: I write my Monte Carlo routine, I run it on a ferromagnet, and I find the critical exponents to three digits. Great. Then into the same algorithm I put as input the Hamiltonian of a spin glass, and it doesn't converge; it takes forever to converge. Why? All right. Now, let's look at another model of computation, which is the Turing machine. OK. So we have the alphabet. Well, here there is a slight distinction, because there is one set that we want to be only part of the whole alphabet: there is an alphabet and there is an external alphabet. OK. Why is this? Because we want a blank symbol. The blank symbol is an element of the alphabet, but we don't want the input to contain the blank symbol. It's a technical requirement. So, for example, the external alphabet A can be the Boolean {0, 1}. OK. Within the set of states, there are two states now which are called halting states: q_accept and q_reject are elements of the set of states Q. And the transition function is more complicated: it goes from the set of states times the alphabet to the set of states times the alphabet times one of these things, {minus one, zero, plus one}. What does it mean? I'll tell you in a second. And then there is a tape. The tape is a function from the natural numbers to the alphabet, meaning that I have a tape; really, you have to think of the tape as infinite. OK. Your machine has a pointer, which points to a position on this tape, and it has an internal state. Depending on the internal state and what it reads here, it changes the internal state, moves left, right, or stays put, and writes something else. So the difference with the automaton is that the Turing machine can actually write on the tape. Now, Turing did this work before computers were actually built. So what did he have in mind? He had in mind a person doing a calculation, which requires a lot of imagination.
So, for example, in the 20s and 30s, people needed to do integrals, like we do now. Now, we go on a computer, we write Integrate, open square bracket, and we get a number. Back then, they had a function, and there were small sections of the departments of physics and math to which you would go and say, look, I need this integral. And the person in charge would take it and say, okay, come back on Saturday. And so you go home, and in the meanwhile the person says, okay, now let's see, what do I do? Let's use this algorithm to do this integral, or let's use these other algorithms, bisection, stuff like that. And he follows the algorithm, writes down number by number, sums all the numbers, and gets an answer. The professor would come after three days and get the number. And, okay, so what Turing had in mind was this person, the person in charge, who was doing the calculation. He or she has a notebook, and writes in the notebook, can erase in the notebook, and depending on what's written in the notebook, can decide to flip a page back or forward or stay on the same page. And there is an internal state in his or her brain that says: okay, if I read this symbol, do this. Good. So the tape can be changed. No, but only at the last point. Only one. Yes. Okay, so this is the definition of the Turing machine. So you have to imagine this thing that moves, reads a cell, writes on the cell, and so on. So what can such a Turing machine do? I mean, I'm not asking which languages it can decide, but what are the possibilities? The possibilities are two, essentially. Either the machine stops, because during the computation, at a certain point, it enters one of the halting states. Oh, I didn't tell you this, sorry: if at any point in the computation the Turing machine enters one of these states, it exits the computation.
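The definition above can be sketched as a small simulator. The machine below is a toy of my own (not from the lecture): a transition function delta(state, symbol) returning (new state, symbol to write, move in {-1, 0, +1}), a one-way-infinite tape of blanks, and the two halting states. It scans to the end of its input and accepts iff the last symbol is 1, so it decides the same language as M2, but now with a writable tape:

```python
# A minimal Turing machine simulator: tape as a sparse dict over the
# natural numbers, a head position, and an internal state. The machine
# halts when it enters q_accept or q_reject.
BLANK = "_"

def run_tm(delta, tape_input, q0="q0"):
    tape = dict(enumerate(tape_input))     # positions not stored read as BLANK
    head, state = 0, q0
    while state not in ("q_accept", "q_reject"):
        sym = tape.get(head, BLANK)
        state, write, move = delta[(state, sym)]
        tape[head] = write                 # the machine can write, unlike an FA
        head = max(0, head + move)         # one-way-infinite tape
    return state == "q_accept"

delta = {
    ("q0", "0"): ("q0", "0", +1),          # scan right over the input
    ("q0", "1"): ("q0", "1", +1),
    ("q0", BLANK): ("q1", BLANK, -1),      # first blank reached: step back
    ("q1", "1"): ("q_accept", "1", 0),     # last symbol 1: accept
    ("q1", "0"): ("q_reject", "0", 0),
    ("q1", BLANK): ("q_reject", BLANK, 0), # empty input: reject
}

print(run_tm(delta, "1101"))  # True
```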
The halting states are in the definition of the Turing machine. The Turing machine is the algorithm. So this is maybe a little bit difficult to digest. And in fact, it's called the Church-Turing thesis. The Church-Turing thesis is: every algorithm is a Turing machine. Now, this is somehow the definition of algorithm, if you want. Because when I talk to you and I say, oh, I have an algorithm to find the 100th digit of pi, I tell you the algorithm. If I tell you I have an algorithm to do something very strange, find an arbitrary digit of pi, and this will run in two steps, you start thinking, well, actually, this is not really an algorithm; it's not what we have in mind as a notion of algorithm. An algorithm is a certain sequence of steps. You can go back. You can do this thing. You can change it internally. So the Turing machine is the best way we have come up with so far to systematize our concept of algorithm. And this doesn't change in quantum mechanics, by the way. Turing machines also play an important role in quantum systems. Yes, yes, yes. Still at the beginning. OK, now. Good. A Turing machine loops if, during the computation, it never enters one of the halting states. A language is decided by a Turing machine if, on every string, the Turing machine is guaranteed to stop. So decidable languages can be written as the union of yes-instances and no-instances, which do not overlap. And if x is in L_yes, then I feed it into the Turing machine, and at a certain point I'm guaranteed that the Turing machine will enter the halting state accept. If I pick a string in L_no, the Turing machine is guaranteed to stop in reject. Are all languages decidable? The answer is no. So, are all tasks algorithmic? No. If I say it like that, it sounds a bit more intuitive. There are several nice examples. The example that is always given is the halting problem itself.
Deciding whether a Turing machine on a given input will stop or loop forever is not decidable. HALTING is the problem in which a bit string x consists of a bitwise description of a Turing machine and an input; x is in L_yes if the Turing machine stops on the input, and x is in L_no if the Turing machine loops forever on the input. The way to show that this halting problem is not decidable is Cantor's diagonal argument, the same thing that you use to prove that there are more real numbers than natural numbers. There are more languages than Turing machines, because the set of languages is a power set, while a Turing machine is just a string. It's an algorithm, which means a string; it's on your hard drive; it's just an integer number. But there is something very interesting: there is an undecidable problem which I like a lot, and it's Hilbert's 10th problem. Hilbert's 10th problem is the following. I give you a polynomial with integer coefficients in several variables, say x and y and so on, and I ask you whether this polynomial has a root which is integer; I mean, x and y are integers. Is there an algorithm to solve this problem? The answer came in the 60s and 70s from Davis, Putnam, Robinson, and Matiyasevich, who proved that this problem is not decidable. There is no algorithm that, given as input a polynomial with integer coefficients, tells you whether or not there is a root which is integer. This is very surprising, because if you ask the same question for a polynomial in a single variable, the answer is yes, and I can even give you an algorithm. First of all, if you have a polynomial, by looking at the coefficients you can tell where all the roots are going to be. There is an interval: by comparing the coefficient of the largest power with the lower coefficients, you can say, I know that all the roots of this polynomial are going to be in this interval, between x1 and x2.
And then I test all the integers in this interval and I see whether the polynomial is zero or not. So this is not only decidable, but it's also easy to decide. But now I go to more variables, and the problem suddenly doesn't have an algorithmic solution. From my point of view, this means that there is no way of bounding the region in which the roots of a polynomial in several variables are. This can be an unbounded region, because the polynomial can have things like x^2 - y^2. So there can be a big region, an infinite region, and the algorithm that tests all the numbers in that region doesn't work, because it's not guaranteed to halt. OK. So there are some problems which are decidable and some problems which are not decidable. OK. Among the problems which are decidable, there are problems which are easy and problems which are not easy. This brings me back to what I was saying before about trying to compute things for a spin glass versus for a ferromagnet. By compute things I mean: compute correlation functions of spins, compute the average energy, compute more complicated things. And if you want to formalize this notion of simple and non-simple, easy to solve and difficult to solve, you are led to define what are called complexity classes. Complexity classes are classes of languages. So, given a language, since the strings which are in the language can be unboundedly large (I shouldn't say infinitely large), you can ask how difficult it is to solve the problem for larger and larger strings. For example, if x has size n, meaning that it is a bit string with n bits, then, assuming that the language is decidable, how long does the Turing machine take to enter the accept or reject state?
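The one-variable algorithm just described can be made concrete. This is a sketch of mine, using the standard Cauchy root bound (every root of a_n x^n + ... + a_0 satisfies |x| <= 1 + max_i |a_i| / |a_n|), so only finitely many integers need to be tested:

```python
# Decide whether a one-variable integer polynomial has an integer root:
# bound the roots from the coefficients, then test every integer in range.
def integer_roots(coeffs):
    """coeffs = [a0, a1, ..., an], integers, with an != 0 and degree >= 1."""
    an = coeffs[-1]
    bound = 1 + max(abs(a) for a in coeffs[:-1]) / abs(an)  # Cauchy bound
    B = int(bound) + 1
    roots = []
    for x in range(-B, B + 1):
        if sum(a * x**k for k, a in enumerate(coeffs)) == 0:
            roots.append(x)
    return roots

print(integer_roots([-6, 1, 1]))  # x^2 + x - 6 = (x + 3)(x - 2): [-3, 2]
print(integer_roots([1, 0, 1]))   # x^2 + 1 has no real root at all: []
```

Exactly this bounding step is what fails in several variables, which is why the same brute-force idea gives no algorithm there.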
So if it takes something like polynomial time, if the number of steps that the machine takes to decide this problem grows like n squared (actually I should put this as order n squared; I mean, it doesn't really have to be n squared, it can be 3n squared, it's the order that matters), then this problem is said to be in P. This language is in P. P is a class of languages. And in general, if there is some alpha such that the time it takes to solve the problem on an input of size n is n to the alpha, then this language is in P. Are all languages in P? We don't know. This is one of the million-dollar prizes: whether there is anything outside P. So what do we define? How do we define something beyond P? Well, the simplest definition is: problems for which a purported solution can be checked in polynomial time. So, given an x, I am presented with a certificate c, which also has size, let's say, at most polynomial in n, and I have a machine called a verifier which takes x and c and runs in polynomial time. If x is in L_yes, then there exists a c such that the verifier spits out 1; if x is in L_no, then for every attempted certificate the verifier gives 0. This is the intuition that verifying the solution of a problem is simpler than solving the problem itself. Sometimes. Sometimes maybe not. But the class of problems for which I can give you a solution and you can verify whether the solution is correct, and whatever wrong thing you give me, I am going to answer no, this class is called NP. And clearly P is contained in NP, because if I can solve the problem in polynomial time, I just have to ignore the certificate. It's like when you go to somebody very good and tell him, look, I have this result, these are my notes; and he doesn't even take your notes: yeah, your result is correct, without looking at your notes. Onsager was supposed to do this. So, when people said, oh, yeah, I have this result.
He never published much. So people would say, oh, look, I have these results, these are my notes. Oh, don't give me your notes, just tell me your results. And that's all: he'd say, yeah, that's correct. So, P is in NP. The N stands not for non-polynomial, but for non-deterministic polynomial, but I cannot explain that here; I mean, this is the definition. So there is a million-dollar question: whether there is a problem in NP which is not in P, or, vice versa, whether P is equal to NP. We still don't know this. We don't know if, with this definition, verifying problems is actually, strictly speaking, easier than solving them. And this is actually... I should also say that there are problems which are known to be decidable, but we don't know if they are in NP or not. NP is not the end of the story. There are bigger classes, and there are problems which are not even known to be in NP. If I'm not mistaken, I remember that there is a hierarchy of classes? There is a hierarchy. And do we at least know that the innermost class is not equal to the outermost? No, we don't know that, because if it collapses at any level, the whole polynomial hierarchy collapses. But for running time we do know something. I mean, there are problems solvable in time n cubed which are not solvable in time n squared, and these are very easy to cook up. Take halting in which you restrict the running time of your machine to time n to the 2.5. This can be solved in time n cubed but not in time n squared: because if it could be solved in time n squared, then the restriction on the running time of the machine wouldn't really matter, and you could solve the halting problem. So these classes, time n squared, n cubed, n to the fourth, are distinct. They are distinct. We know there are more problems in the class of time n cubed than in the class of time n squared.
But whether there are more problems in NP than in P, we don't know. So, for example, which problems are in NP? Let me give you an example, which is SUBSET SUM. Subset sum is the following. I have a set S of integers a1, ..., an, and I want to know if there is a subset S' contained in S such that, when I sum over the ai contained in S', I hit a given target value t. So, these are a set of numbers, I don't know, like 1, 3, 4, 5, 37, 43. And then I give you a target, 157. And I ask you: is there a subset of S that sums to this target value? It's called subset sum. This problem is very easy to check, right? The certificate is just the subset, and I can verify in polynomial time by summing the numbers that you gave me. And also it's impossible to fool me, because if there is no such subset and you give me a purported subset, it's easy for me to check that its sum is not the target. But we don't know of any algorithm which solves this problem in time polynomial in n. We don't know. So subset sum is in NP. And probably, probably, it is not in P. Yes, it's NP-complete. That's why we think it's probably not in P. So this actually is a very important notion. NP-complete problems are problems for which, if you have a polynomial algorithm to solve that problem, you can use that polynomial algorithm as a subroutine to solve all NP problems in polynomial time. So they are the most difficult problems in NP. OK. It's very hot here. Much better if I stay here than if I move. All right. Now, good. What about quantum computing? OK, 50 minutes. What about quantum computing? A Turing machine can also be written as a circuit. A circuit is a set of gates. OK. There are AND gates. There are NOT gates. Let's do this: NOT this, then NOT this, and then these two together into an AND. And then, you know, this one and this one go into an OR. And then they both go into an AND.
I don't know. Something like this. OK. So, you put in an input, and you get an output. The output can be a single bit, 0 or 1, or it can be many bits. In that case, the circuit is doing a computation. These are circuits like the CPUs in our computers. They do computation. You put in 64 bits, you get 64 bits. OK. Good. By the way, deciding whether there is an input on which a given circuit is going to give you a yes is an NP-complete problem. OK. This is called circuit SAT. Is there an input such that the circuit is going to give you 1, or, for every input, is the circuit going to give you 0? This is a difficult problem, an NP-complete problem, called circuit SAT. Now, good. So, no, no, no, they're not... OK. So circuit SAT is a decidable problem. So your Turing machine, your algorithm, is guaranteed to end. You see what I mean. Otherwise, you would have to have... Because circuits are directed graphs. You can only go one way, and you always get closer to the end. So there are no loops. So, on the side of NP problems: you can write any NP problem, via a Turing machine, as a circuit, and ask whether this circuit will give you 1 or 0. These are very complicated problems. So, for example, when people do engineering of circuits at IBM, they have to solve this kind of problem, to see whether some input can actually make your computation unstable, because there are many levels of things that you want to check during a real computation on a real circuit. And these are the kinds of problems that these people have to solve, which we know are NP-complete. OK. What is a quantum circuit? Now, what is quantum computation? So, there is a way of introducing quantum computation which is based on quantum Turing machines, but that's very old and nobody uses it anymore.
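The subset sum example from a moment ago makes the verify-versus-solve asymmetry concrete. A sketch of mine: the verifier runs in polynomial time, while the only obvious solver tries all 2^n subsets (for simplicity the verifier below treats S as a set, ignoring repeated elements):

```python
# Subset sum: polynomial-time verification of a certificate versus
# brute-force search over all 2^n subsets.
from itertools import combinations

def verify(s, target, certificate):
    # polynomial time: check membership and sum the certificate
    return all(a in s for a in certificate) and sum(certificate) == target

def solve(s, target):
    # exponential time: enumerate every subset
    for r in range(len(s) + 1):
        for sub in combinations(s, r):
            if sum(sub) == target:
                return list(sub)
    return None

s = [1, 3, 4, 5, 37, 43]
print(solve(s, 44))            # [1, 43]
print(verify(s, 44, [1, 43]))  # True
print(solve(s, 157))           # None: the whole set only sums to 93
```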
Because there is an equivalent definition of quantum computation in which you only use circuits with gates. And what is a quantum computation? Quantum computation is the following. It's a circuit, OK? It's a circuit in which you can have things like this: two-qubit gates, one-qubit gates, controlled gates. So, this is gate U1, this is gate U2, this is gate U3, and there is gate U4 here. And the idea is that you take an input state, an initial state psi_0, which is the product x1 tensor x2 tensor yada-yada tensor xn. You apply the set of gates to psi_0 and you get the output. The output is a quantum state. We don't want a quantum state. We want a yes/no answer. So what do we do? We take one of these bits here and we measure it. We measure and we get either 0 or 1: a probability p to get 1, and probability 1 minus p to get 0. So the answer to your computation is 1 with probability p and 0 with probability 1 minus p. So in the end we are going to have a thing here. We take psi and we measure the first bit, let's say. The probability is the expectation value of a projector. So let's say the following. Let's take this operator, sigma_z. Sigma_z is plus 1 on the state 0 and minus 1 on the state 1. And then I measure sigma_z of bit 1. How is this related to the probability? Well, the expectation value of sigma_z is (1 minus p) times (plus 1) plus p times (minus 1), which is 1 minus 2p. So when I measure, I get either 0 or 1. In a lab I get either 0 or 1, and I never get a half. But if I do it many, many times, then I can estimate this probability p to get 1. Now, what is the answer? The answer is 1 if you get 1 with a probability which is significantly larger than one half, let's say two thirds; zero otherwise. This is how you do a quantum computation. It doesn't sound like much, I know.
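The measurement step can be checked numerically. A toy illustration of my own with NumPy: given an output state vector psi, the probability p of reading 1 on the first qubit is the expectation value of the projector onto that qubit's 1 subspace, and the sigma_z expectation value is 1 - 2p (the convention that the first qubit is the most significant bit is my choice, not from the lecture):

```python
# Extract p = Prob(first qubit reads 1) from an n-qubit state vector,
# and relate it to <sigma_z> = (1 - p)*(+1) + p*(-1) = 1 - 2p.
import numpy as np

def prob_first_bit_one(psi, n):
    """psi: length-2**n state vector; first qubit = most significant bit."""
    amps = psi.reshape(2, 2 ** (n - 1))  # split amplitudes by first-qubit value
    return float(np.sum(np.abs(amps[1]) ** 2))

# example: first qubit in (|0> + |1>)/sqrt(2), second qubit in |0>
psi = np.kron(np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, 0.0]))
p = prob_first_bit_one(psi, 2)
print(p)          # ~0.5
print(1 - 2 * p)  # <sigma_z> on the first qubit, ~0
```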
It also doesn't sound like all the strange things that people claim quantum computation can do. But one has to go and look at the details. So a quantum algorithm is a set of gates, and the answer of my algorithm, run on a given input, is this probability p to get 1. So the analog of the class P is the class BQP: bounded-error quantum polynomial time. So what is the length of the algorithm? It's the number of gates that I have to apply. There are various measures: either the number of gates or the depth, the longest path through the circuit. So for example the depth here would be 1, 2, 3, 4. Depth 4. So there are various measures. Yes. Of course it's probabilistic. Actually, I should say that I should have introduced BPP, which is the analogous randomized class in classical computation, but I don't have time. So the issue is that there are probabilities in quantum mechanics. There is nothing you can do about it. So if you define a class by demanding that the algorithm give you exactly 1 or 0, then there is no problem in this class, no languages in this class. But if the answer is: did you get 1? So, yes, you run your algorithm, and x is in L_yes if the probability that this thing here, the first bit, is 1 is larger than or equal to two thirds, and x is in L_no if the probability of the first bit being 1 is less than one third. I write the input as a quantum state, I run the quantum algorithm, and then I measure one bit. Yes. So, since you have to guarantee that this is not an infinite loop, you simply have to unroll it? Oh no, no, no, no. That's not possible, because the output here, if you feed it into another quantum machine, which can even be the same machine, is an entangled state; it's not of this product form. So it's not that you can just run it two times on this input. You see what I mean? In a classical machine that would be the same.
I can rerun the same algorithm many times, but here I would get something different: I run, I mean, this is a unitary, I run the unitary squared. This is a unitary transformation, a product of unitary transformations; they conserve probability. And a circuit cannot have an infinite loop, cannot have a loop at all: it must be a directed graph, there must be a direction in which I am going, there must be no loops, otherwise I end up in the undecidable. Okay, good. So, to finish, let me give you one example. So far it seems like you could make things worse? Yeah, exactly. It seems like we have made things worse. However, clearly P is contained in BQP. Okay. Why? Because these things can just be classical gates. Now, the only thing is that the classical gates one usually thinks about don't conserve the number of bits; for example, AND has two bits in and one bit out. This cannot be done in quantum mechanics, because you cannot destroy qubits: you lose probability. But for every circuit like this, if you enlarge the space, there is an invertible circuit which does the same computation. Yeah. No, you always initialize with a product state, because you want a string in a language, and there is a one-to-one correspondence only between strings and product states, because the Hilbert space is a continuum as a vector space, right? So there is a one-to-one correspondence between bit strings and basis states of this vector space. But if you don't like initializing like that, then you apply whatever gates you want. For example, one thing that you can do, which is very useful in many algorithms: suppose you want to generate the state psi which is the completely uniform superposition of all the bit strings. How do we do it? This seems like a crazy thing to do, right? It's the completely uniform superposition. It's actually very easy, because this can be written as a tensor product. This is actually a product state.
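The point about enlarging the space to make gates invertible is usually illustrated with the Toffoli gate, a standard construction (the lecture only states the general fact; the specific gate is my choice). Classically sketched:

```python
# The Toffoli gate (a, b, c) -> (a, b, c XOR (a AND b)) is reversible
# (it is its own inverse), and with the extra bit c initialized to 0 it
# computes AND without destroying any bits.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

for a in (0, 1):
    for b in (0, 1):
        out = toffoli(a, b, 0)
        assert out[2] == (a & b)           # AND, embedded in a bigger space
        assert toffoli(*out) == (a, b, 0)  # applying it twice undoes it
print("Toffoli computes AND reversibly")
```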
I mean, if you take the single-spin state (|0> + |1>)/sqrt(2) and take its tensor product n times, you get this uniform superposition. By the way, this is no longer true if you change the phases: with generic phases it is typically not a product state anymore, it is a highly entangled state. So how do you do it? You just apply H, the Hadamard gate: you initialize in the zero state, apply the Hadamard gate independently to every spin, and you get this. This actually gives me an easy counterexample to the claim people make that quantum mechanics is like doing calculations in parallel. You don't gain anything by doing calculations "in parallel" over the 2^n inputs. Suppose we do this, and suppose we have a gate which computes a function f and spits out something, where the only thing that interests me, the outcome of the computation, is the first bit. Suppose our question is: is there an x such that f(x) = 1, or is f(x) = 0 for every input? You might think that if you run this circuit, zero, zero, zero, then Hadamard, Hadamard, Hadamard, you get the complete superposition here, and then out comes the complete superposition of all the results. Yes, but they come with a weight 1/2^n. So if there is one state for which f gives me 1 and all the others give me 0, when I go and measure I have to do of order 2^n measurements in my lab to estimate whether f is actually 1 for some x or not. But 2^n measurements is exactly what you would do classically, by throwing random bit strings at the function and computing f(x). So if this is my problem, with an n-bit string, and I have no information about the structure of f, then I just have to run it that many times. So it is not true that quantum computers simply do calculations in parallel.
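To make the "it's actually a product state" point concrete, here is a small numpy check (my own sketch) that applying a Hadamard to each of n qubits in the zero state yields the uniform superposition with amplitude 1/sqrt(2^n) on every bit string:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # the Hadamard gate

n = 4
state = np.zeros(2**n)
state[0] = 1.0  # |00...0>

# Applying H to every qubit independently means the full operator
# is the n-fold tensor (Kronecker) product of single-qubit Hadamards.
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)
state = Hn @ state

# Every one of the 2^n bit strings carries equal amplitude 1/sqrt(2^n).
print(np.allclose(state, np.full(2**n, 1 / np.sqrt(2**n))))
```

Depth 1 and n gates suffice, precisely because the target state factorizes as a tensor product; no entanglement is needed to prepare it.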
The efficient quantum algorithms that we have always use interference; in the circuit above there is no interference, so how could it help? They always use interference. Okay, do I have time to give one short example? I think I have exhausted everybody; let's say five minutes, no more. Where is Grover? Ah, there it is. So there are a couple of algorithms for which we know things precisely. For the one I am going to show you, we actually know that no classical algorithm can do better. For the other ones, for example Shor's algorithm, we do not know whether there is a classical algorithm which can do factoring in polynomial time, because factoring is not an NP-complete problem. Factoring is difficult, but it is far from NP-complete; in fact, factoring is both in NP and in co-NP. Co-NP means that it is also easy to verify that there is no answer, but I don't have time to go into this. So let's assume we have an oracle function A which takes an input y and returns 0 or 1. I want to know whether there is an x such that A(x) = 1, and I want to find what this x is. Classically, if I don't know what the oracle does (that's why I call it an oracle), I have to call it of order 2^n times. Quantum mechanically, I need to call the oracle a number of times T of order sqrt(2^n). How does this magic work? Given this function, let's build the unitary U which acts as U|y> = -|y> if A(y) = 1 and U|y> = |y> if A(y) = 0. This is a unitary, and it calls the oracle once. For simplicity, let's assume there is a single y, call it y0, such that A(y0) = 1, and my quest is to find y0. Then U = 1 - 2|y0><y0|, the identity minus twice the projector onto y0. The Hadamards give me |xi>, the complete superposition over all the states, and I build the other unitary V = 1 - 2|xi><xi|. U is the reflection through the plane perpendicular to |y0>, and V is
the reflection through the plane perpendicular to |xi>. Now, if I compose them, I get a rotation in the plane spanned by |y0> and |xi>: VU is a rotation by an angle phi = 2*alpha, where alpha is the angle between |xi> and the plane perpendicular to |y0>. In principle this angle would be difficult to know, and if we knew it the rotation would be easy; but here we do know it, because |xi> is the complete superposition over all the strings, so irrespective of what y0 is, sin(alpha) = 1/sqrt(2^n). And that is exactly where the square root comes from. So let me write the rotation as VU = e^{i phi tau}, with phi approximately 2/sqrt(2^n). Then (VU) raised to the power (pi/4) sqrt(2^n) is a rotation by roughly pi/2, which takes |xi> and sends it to |y0>. So (VU)^{(pi/4) sqrt(2^n)} |xi> = |y0>, up to an error of order 1/sqrt(2^n). This means that if I take this circuit, the Hadamards which build |xi>, then U, then V, and I apply the pair of order sqrt(2^n) times, I start with |00...0> and I am essentially guaranteed to exit with |y0>. You don't know what y0 is in advance, but you can call the oracle, right?
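Putting the two reflections together, here is a small numpy simulation of Grover's iteration (my own sketch; the marked index y0 is an arbitrary choice). U and V are implemented directly as the reflections 1 - 2|y0><y0| and 1 - 2|xi><xi| defined above:

```python
import numpy as np

n = 8                                  # number of qubits
N = 2**n
y0 = 137                               # the marked item (arbitrary)

xi = np.full(N, 1 / np.sqrt(N))        # uniform superposition |xi>

def U(psi):
    """Oracle reflection 1 - 2|y0><y0|: flips the sign of the marked item."""
    out = psi.copy()
    out[y0] = -out[y0]
    return out

def V(psi):
    """Reflection 1 - 2|xi><xi| through the plane perpendicular to |xi>."""
    return psi - 2 * xi * (xi @ psi)

psi = xi.copy()
steps = int(round(np.pi / 4 * np.sqrt(N)))   # ~ (pi/4) sqrt(2^n) oracle calls
for _ in range(steps):
    psi = V(U(psi))

print(f"{steps} oracle calls, probability of measuring y0: {psi[y0]**2:.3f}")
```

Defining V with the opposite overall sign (2|xi><xi| - 1, the usual "diffusion" operator) changes the state only by a global phase, so the measured probabilities are identical.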
So the oracle is a function which takes an input and says 1 or 0, and I don't know how: somebody gave me this chip, I plug it into my hardware, but I cannot look inside the chip. If I can send quantum states to this chip, and the chip preserves coherence, then I can use its output as a subroutine. The name of the game, the difficulty of the algorithm, is how many times you have to call the oracle: with classical computing you have to call the oracle of order 2^n times, while here you only need to call it of order sqrt(2^n) times. This is called Grover's algorithm, and I will stop here. Okay. So, there are two big practical problems in quantum computing: the first is to keep coherence, and the second is to keep coherence. That is, the first is to keep coherence in a memory, and the second is to perform operations without destroying the coherence. The whole enterprise, if you want, is technological. We know that at the level of fundamental physics this should work, we should be able to get these results; but building these objects is a nightmare, because, as you said, you want to keep coherence during the computation, but also, when you apply the gates, you don't want to introduce errors into your wave function or entangle it with something you don't control. So it's a technological problem, and a big one. This is actually what I wanted to show you. There are two current trends in the technology. One is ions in a trap: you have ions in an optical trap whose states you can engineer. The important thing is to have something which can be in a superposition of states, and ions in a trap can be; so you can manipulate them in ways which are equivalent to applying gates. And the second technology, which
is more condensed-matter hard, is superconducting qubits. You have superconducting islands, and with various technologies you can essentially make a qubit: in these superconducting islands you can put some charge, a Cooper pair, made of two electrons, and this pair can be in a superposition of being there and not being there. Using these two charge states we can build a superconducting qubit, and we can make operations on one qubit and on two qubits, and that is all we need for universal quantum computation. How many qubits are currently perfectly under control? Sixteen. Now, 16 qubits seems laughable, but if you have to simulate a 16-qubit quantum system, you typically need a number of operations that scales like 2^16, which is about 65,000. What about Bristlecone? These new claims concern Bristlecone, the chip that Google is working on, which has 72 qubits. I was at Google last week, actually, and they gave a presentation where they explained the problems they face, and they claim that by the end of the year they will be able to do a computation on these 72 qubits that cannot be done on the largest classical computer on Earth. I mean, 2^72 is really big: I don't think there is enough RAM on planet Earth to store the wave function of 72 qubits. You would have to store 2^72 double-precision numbers, and that's a lot; we already have trouble simulating 2^40 amplitudes on clusters. So how far along are we? Unclear. Something will be done with these 72 qubits that cannot be done on a classical computer; however, in order to do everything we want to do, we have to go orders of magnitude beyond that, talking thousands, millions of qubits. If we have a system with a million qubits, then we can do error correction and arbitrary-precision quantum computation. But that's another story, that's something
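A back-of-the-envelope check of those memory claims, assuming one complex double (16 bytes) per amplitude (the sketch and its helper names are mine, not from the lecture):

```python
def statevector_bytes(n_qubits: int, bytes_per_amp: int = 16) -> int:
    """Bytes needed to store a dense 2^n state vector of complex doubles."""
    return 2**n_qubits * bytes_per_amp

def human(nbytes: float) -> str:
    """Render a byte count with binary prefixes."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB"):
        if nbytes < 1024:
            return f"{nbytes:.0f} {unit}"
        nbytes /= 1024
    return f"{nbytes:.0f} YiB"

for n in (16, 40, 72):
    print(f"{n:2d} qubits -> {human(statevector_bytes(n))}")
# 16 qubits fit in a megabyte; 40 qubits already need terabytes;
# 72 qubits need tens of zettabytes, far beyond all RAM on Earth.
```

The doubling per qubit is the whole story: each extra qubit doubles the state vector, so the jump from "simulable on a cluster" to "impossible on any classical machine" happens over a few tens of qubits.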
if you can do that. So, my idea was to... I think not many of you had actually looked at what a Turing machine is before, so I really wanted to start from there. I think I have been too optimistic about the plan of this lecture, by a factor of three at least. But I really wanted to tell you what a Turing machine is, and to make you think about quantum computing in terms of "how can I do better than a Turing machine?", because you read things in the newspapers which make no sense. They claim it's fast, but you need to know fast compared to what; we are scientists, we need a framework in which to put the ideas, and the Turing machine is the framework against which to compare all of quantum computation. Yes, there was the issue of how they knew D-Wave was quantum. No, Bristlecone is completely different. D-Wave was a different technology, adiabatic quantum computation, which is a different protocol, something these groups have not pursued. And it was not clear: they had 2,000 qubits, but they were just not keeping coherence among more than 7 or 8 of them at a time, and still they were claiming a speedup with respect to some classical algorithm. The current architecture that IBM, Intel and Google are using is completely different. They say: we'll stop at 16, but we'll show you that we can keep coherence on those 16 for as long as the calculation requires; then we go to 20 and show that these 20 qubits keep coherence for as long as the computation needs; then we go to 30, then 40, then 72, each time showing that coherence and quantum effects survive for as long as the calculation requires. It is not "let's go to 1,000 and see what happens". That approach was useful, I have to say; being in the community, it was useful for getting people interested, in some sense. But nobody thought it was really a quantum computer. Oh, there is a theorem, which is actually not even that difficult to
prove, that adiabatic quantum computation is universal: it is equivalent to the circuit model. Whatever you can do with the circuit model you can turn into an adiabatic quantum computation, and vice versa. Yes, it is a constructive proof: you take the circuit and turn it into an adiabatic quantum computation. However, the issue is that if you take a circuit which works on 10 qubits with 20 gates, you might turn it into an adiabatic computation with, say, 100 qubits and 1,000 gates. The parallel is: whatever I can do here I can do there, but not with the same resources. In this kind of proof you most of the time have to allow for many, many more resources. It is just like reversible computation: reversible computation can reproduce the results of irreversible computation, but you have to enlarge your bit space. It is the same here: the circuit model and the adiabatic computation model are equally powerful, but one may require more resources than the other. Yes.