Yeah, so welcome. My name is Gianni. Thank you, Antonio, for the nice introduction. I'm in a strange position now, because I feel like a kind of John the Baptist figure, in the sense that my presentation is going to be the only one where quantum physics hardly enters at all. As part of the introduction to quantum computing, the organizers decided that, considering the different backgrounds and career stages of the participants, it would be useful to have an introduction from classical computing, tracing how we eventually ended up discussing quantum computing. So for some of you, some of the things I'm going to talk about will be old concepts — hopefully you won't have seen every single thing I'm going to present. But this is my contribution to the hackathon: preparing the way for the second speaker to discuss quantum computation models in more detail. So, without further ado, the story I want to tell you is a story with theorems. The story starts in 18th-century Königsberg, a city at the time in Prussia. Legend has it that one of the pastimes of the people was figuring out whether it was possible to take a stroll through the city crossing each and every one of the seven bridges that straddled the river passing through the town. So this is what Königsberg looked like at the time. Why do we know these kinds of things? Because this particular problem — finding a path that crosses all of the bridges of Königsberg — was the example that spurred Leonhard Euler to write one of his most well-known papers, about what we now call Eulerian graphs.
So the Eulerian graph problem is: I give you a graph, and I want you to find a cycle that passes through all of the edges of the graph, traversing each edge exactly once. In general, if you were to try to solve something like this, naively you would think you have to do some sort of tree search. You start somewhere, you try to cross one of the edges — one of the bridges — then you try to follow through until you get stuck, and then maybe you need to backtrack. That's the naive approach one would take to these kinds of problems. But Euler being Euler, he figured out that, well, for Königsberg — he says in his paper — this might be doable, because there are not that many bridges and not that many pieces of land. But then, by some accounts, there are cities with 121 islands connected by 435 bridges, and there this seems far less doable. So what Leonhard Euler figured out is this. First of all, obviously, you can simplify the map into a graph — what actually matters is this structure; it's the same thing, just simplified. Then, in what we now call an Eulerian cycle, every time you visit a node, you also need to leave it. So if the graph has an Eulerian cycle, all the nodes need to have an even degree — an even number of edges incident to each particular vertex. And well, maybe this is something we can actually prove. What people usually state is that this is a necessary and sufficient condition for being an Eulerian graph. But what Euler actually proved is only one side of the claim; the fact that people maybe don't know is that the other implication was also proven — not by Euler, though, but by some other person nobody remembers, so for his benefit we will not discuss it at this time. The main point is that now I have a particular property — having all even degrees — that allows me to decide whether an Eulerian cycle exists in the graph.
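Euler's even-degree criterion from the passage above can be sketched in a few lines of Python (a minimal sketch: it assumes the graph is connected, and the function and variable names are my own):

```python
def has_eulerian_cycle(edges):
    """Euler's criterion: a connected (multi)graph has an Eulerian cycle
    iff every vertex has even degree. Connectivity is assumed here."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return all(d % 2 == 0 for d in degree.values())

# The seven bridges of Königsberg, with the land masses labelled A, B, C, D:
konigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
              ("A", "D"), ("B", "D"), ("C", "D")]
print(has_eulerian_cycle(konigsberg))  # False: all four degrees are odd
```

Note that this check is linear in the number of edges — no backtracking search at all.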
This is much simpler to check than the kind of tree search I was naively doing at the beginning; without Euler, one would be stuck with that technique. Contrast this with 1859, when Sir William Rowan Hamilton — the Royal Astronomer of Ireland and a professor of astronomy at Trinity College Dublin — decided to start selling a toy called the Icosian game, which I'll briefly describe. You have to imagine a wooden board with grooves, and with holes at the vertices into which you can put numbered pegs. The idea is to define a path through this graph which, instead of passing through all of the edges, visits all of the vertices exactly once. Well, the toy was a commercial failure, so the man kept his day job. But the main point here is that even though this problem seems very similar to the previous one, to this day we do not have an efficient way of deciding whether a graph has a Hamiltonian cycle, whereas for the Eulerian case we do, thanks to Euler. You might be disappointed in me saying this: for the Eulerian problem it's clear we have a simple property to test, but here we just don't know — we have never found such a property. Maybe in the second part of the lecture we'll come back to this and show more evidence. But as of now, even though a large number of people have tried to find some equivalent property or method to solve this kind of problem, it seems that the hardness of these two problems is intrinsically different. It's not a matter of effort: there seems to be a mathematical structure behind this that makes one easy and the other hard.
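For contrast, the only generally known approaches to the Hamiltonian side are essentially exhaustive. A naive brute-force sketch (names are illustrative; the running time is exponential by design, since no Euler-style degree test is known):

```python
from itertools import permutations

def has_hamiltonian_cycle(vertices, edges):
    """Brute force: try every ordering of the vertices and check that
    consecutive vertices (cyclically) are joined by an edge."""
    edge_set = {frozenset(e) for e in edges}
    n = len(vertices)
    for order in permutations(vertices):
        if all(frozenset((order[i], order[(i + 1) % n])) in edge_set
               for i in range(n)):
            return True
    return False

square = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(has_hamiltonian_cycle(["a", "b", "c", "d"], square))  # True
```

With n vertices this tries up to n! orderings — exactly the kind of search Euler's insight let us avoid in the Eulerian case.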
And this is the first time we encounter something that is very common in computational complexity: proving lower bounds on the hardness of a problem is more complicated than proving upper bounds. To prove that a problem is no harder than some bound x, you just need to exhibit one method that solves the problem using that amount of resources. But to prove that a problem is harder than x, you would have to show that all possible methods for solving it use more resources than x. So there seem to be intrinsically different hardness classes among the combinatorial problems people normally encounter. For the next chapter — in order to formalize this notion a bit better — we need to move to the early 20th century, to 1928, when David Hilbert posed a question now known as the Entscheidungsproblem: is there an effective method that can decide whether a formula is a theorem of a given axiomatic system? Around that time the completeness theorem was proven — if you're familiar with it, fine; otherwise, think of it roughly like this: first-order logic was shown to be sound and complete, meaning that the logical calculus that had been developed was sufficient to prove all and only the logical truths of this axiomatic method. But the Entscheidungsproblem asks about generic mathematical statements: is there a way, if I give you a particular formula, to decide whether it is a theorem or not? One of the problems in answering this kind of question was that there was no precise concept of what an effective method would be. There were intuitive proposals: for example, Alonzo Church had developed the lambda calculus, which is the basis of functional programming languages today.
And Kurt Gödel had developed the theory of recursive functions. But these proposals all suffered from a basic question: OK, those functions might be intuitively computable, but why would I believe that those families of functions comprise all of the computable functions? From our perspective, the solution to this problem was given by Alan Turing in a famous 1936 paper, where he introduced the concept of a Turing machine, which I'll briefly describe now. Formally, you can see a Turing machine as a triple of objects: Sigma, which is an alphabet; Q, which is a finite set of states; and a transition function. The idea is that the machine has access to a tape divided into squares, each of which can contain one symbol from the alphabet. At each time, the machine has a head that can see only one of the squares of the tape. A set of instructions tells it: if you are seeing a particular symbol s and you are in state q, then print a symbol s′, change to a new state q′ in Q, and then either move left, move right, or stay. One can imagine the tape to be, for example, infinitely long to the right — though that is a design decision you can make. A configuration of the machine should describe the contents of the tape — you can assign positions to the squares, and the configuration specifies the content of each, with one symbol of the alphabet taken to indicate a blank, meaning an empty square — and it should also specify a state q and the position of the head. Now, how would such a machine compute something? Well, you have to give it an input, so you specify the contents of the first n squares. The machine starts in an initial configuration, meaning that, as I said, the first few squares contain the symbols of the input.
The initial state is a specific state we designate as initial, and the head starts at position zero. From an initial configuration we move to a successor configuration following the instructions given by the transition function. So if some configuration holds at time t, the configuration at time t + 1 is given by the following: away from the position of the head, the tape does not change; at the position the head is looking at, you write what the instruction tells you to write; the state is updated to q′, the second element of the instruction; and the head position i is updated according to the third element — plus or minus one, depending on whether you move left or right, or unchanged if you stay. These two pieces of information should convince you that if I know my initial configuration and I know the recipe for producing a successor configuration from a previous one, then I can define a sequence of configurations that describes the computation of my machine. In particular, since a computation in the end should produce some final result, I will declare some of the states to be accepting or rejecting — this will be useful later — with the idea that once the machine enters an accepting or rejecting state, the computation stops. If the final state is accepting, I can read the non-blank part of the tape as the output of the machine on that particular input. In general, using this recipe, we can define computable functions — typically these are partial functions, so they might be defined not on all of the domain but only on a subset — from Sigma star to Sigma star, where Sigma star is the set of all finite strings that can be built from the alphabet.
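The configuration-update rule just described can be turned into a tiny simulator. This is a sketch under the conventions above — a transition maps (state, symbol) to (symbol′, state′, move) — and the state names and step limit are my own choices:

```python
def run_tm(delta, tape_input, blank="_", start="q0", accept="qa", reject="qr",
           max_steps=10_000):
    """Simulate a Turing machine. `delta` maps (state, symbol) to
    (new_symbol, new_state, move), with move in {-1, 0, +1}."""
    tape = dict(enumerate(tape_input))   # square position -> symbol
    state, head = start, 0
    for _ in range(max_steps):           # guard: the machine may never halt
        if state in (accept, reject):
            break
        symbol = tape.get(head, blank)   # unwritten squares read as blank
        new_symbol, state, move = delta[(state, symbol)]
        tape[head] = new_symbol
        head = max(head + move, 0)       # tape extends only to the right
    output = "".join(tape[i] for i in sorted(tape)).strip(blank)
    return state, output

# Example machine: flip every bit, then accept on the first blank.
flip = {("q0", "0"): ("1", "q0", +1),
        ("q0", "1"): ("0", "q0", +1),
        ("q0", "_"): ("_", "qa", 0)}
print(run_tm(flip, "0110"))  # ('qa', '1001')
```

The step limit is only there so the sketch always returns; a real machine, of course, has no such guard — which is exactly what the halting problem below is about.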
A function f like this is computable if there exists a Turing machine, say M_f, such that, first, on every input x where f is defined the computation of M_f halts, and second, the output M_f(x) — meaning the output of M_f on input x — equals f(x). Now, from a counting argument one should immediately see that most of these functions are non-computable. Why? Well, because these machines are finite objects, and I can define specific representations for them as strings over a finite alphabet. The main thing I want here is to describe a machine by a finite string so that, in particular, I can feed the source code of one machine into a different machine. There are multiple ways to do this; a simple one that comes to mind uses an alphabet of bars and separators. The point is to encode the transition function. You give a standard enumeration of the states, say q_1, …, q_n, and a standard enumeration of the alphabet, s_1, …, s_m. Then each single application of the transition function to a specific argument can be seen as an instruction. Say you want to write down the instruction "if I am seeing the symbol s_a and I am in the state q_b, then write s_c, go to the state q_d, and make some movement". You can use bars for the indices: a bars for the symbol s_a, b bars for the state q_b, c bars for s_c, d bars for q_d. And finally, one bar can mean going left, two bars going right, three bars staying fixed. The transition function is then just a finite number of expressions like this, and I can use a separator to delimit the parts — symbol, state, symbol, state, movement.
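The bar-and-separator encoding sketched above might look like this in code. This is a hypothetical rendering — the exact alphabet and separators are my own choice, and any injective encoding into a finite alphabet would do:

```python
def encode_tm(delta, states, symbols):
    """Encode each instruction (state, symbol) -> (symbol', state', move)
    in a unary bar notation: k bars name the k-th state or symbol in the
    given enumerations; 1/2/3 bars encode left/right/stay."""
    move_code = {-1: 1, +1: 2, 0: 3}
    parts = []
    for (q, s), (s2, q2, m) in delta.items():
        fields = [symbols.index(s) + 1, states.index(q) + 1,
                  symbols.index(s2) + 1, states.index(q2) + 1,
                  move_code[m]]
        # "," separates the five fields; ";" separates instructions
        parts.append(",".join("|" * f for f in fields))
    return ";".join(parts)

print(encode_tm({("q0", "0"): ("1", "q0", +1)}, ["q0"], ["0", "1"]))
# |,|,||,|,||
```

The only thing that matters is that the encoding is injective over a finite alphabet, so the set of machine descriptions is countable.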
The description of a Turing machine thus becomes simply a sequence of expressions like this. There are nicer ways of encoding machines as strings, but it should hopefully be clear that you can do something like this — it's easy. And you should know that if you have a finite set of symbols, then the set of finite strings you can make out of it is countable, whereas the set of functions from Sigma star to Sigma star has the cardinality of the continuum. So clearly you should expect most of these functions to be non-computable. But it would be somewhat harder to come up with an explicit example of such a non-computable function. This is something Turing also did in his original paper, and the well-known example is the halting problem. Let me go through it very quickly. By the symbol ⟨M⟩ I mean the representation of the machine M as a string. The halting function takes two arguments, ⟨M⟩ and x, and outputs one if the machine M on input x eventually reaches an accepting or rejecting state — at any rate, eventually halts — and zero if the machine runs forever, either because it is stuck in a loop or just because the instructions are such that it never enters one of these halting states. The claim is that this function is not Turing computable — not computable by any machine. The proof is well known, but let me quickly review it. You prove it by contradiction: assume a machine H computing this function exists. The idea is to build a new machine, say X, which on input x first simulates H on the pair (x, x) — it diagonalizes the input, feeding the string to itself as both arguments — and then acts based on the output of this calculation.
The machine X is checking whether the machine described by the string x halts or not on its own description, and it is built to do the opposite: if H says "halts", X runs forever, and if H says "runs forever", X halts. You see that this H cannot exist, because you can ask yourself: what happens if I run the machine X on its own description ⟨X⟩? If you assume that this run stops, then by the definition of X it must run forever; but if you assume it runs forever, then by construction H outputs 1, which means it stops. This is contradictory, and the contradiction means that H cannot exist. So the halting function is non-computable — the halting problem is undecidable — and this is the proof. What we have after Turing, then, is that Alan Turing defined a benchmark for what an effective method is, and people have accepted it ever since. In fact, if you go through the literature, you will find that all the previous proposals — the lambda calculus, the recursive functions — and the Turing computable functions all identify the same set of functions. So people believe that what is intuitively computable is exactly what is Turing computable. This is a thesis — the Church–Turing thesis — not a theorem, not a proposition, not something you can prove. It is just a consensus on the intuitive concept of an algorithm or effective method: a method that can be carried out without any Euler having to come up with some brilliant idea — something completely mechanical. It's the basis of the modern-day computers we have. And on top of having a definition of effective method, we now also have a way of benchmarking the hardness of solving problems.
Because we can use the running time of this benchmark machine — meaning the number of steps it has to go through before reaching the end of its calculation — as a measure of the resources needed to solve problems like the Eulerian problem or the Hamiltonian problem we saw before. So the running time of such a machine, intuitively the number of configurations in the computation, is now something we can use as a universal benchmark for deciding the hardness of computational problems in general. Let me quickly go through how people have done this, and then we'll accelerate towards less standard material. Through the representation I described before — since the behavior of the machine is given by its set of instructions — if I can write down the instructions, I can send them as input to a different machine, and this second machine can, for example, scan the instructions and simulate the first machine. In general, this is the idea that you can take a program, write it down, and feed it to another program, because a program is just a string describing some abstract machine that works in this way or some equivalent way. So I can consider a machine and, at the same time, the description of that machine, and by using the description I can feed a machine — or rather, the instructions that describe it — to a different machine. Through the diagonal construction, one eventually gets a contradiction from imagining that there is a machine that decides halting. Clear? [Audience question: what do you mean, it outputs one or zero on an infinite run?] May I interrupt you — I think there's a misunderstanding here, in the sense that in the construction we are assuming the input is always finite. You cannot feed an infinite input to the machine; that's by construction. The tape is infinite in the sense that it has blanks all the way to the right.
But that is not considered to be part of the input for the machine — the input needs to be finite — so I would say the argument does not apply here. I see that we're running a bit late. If you come from computer science, I'm sure you've done your fair share of exercises on Turing machines and such; they're not, in my view, among the most exciting things. So now we're at the point where we finally have a proposed universal benchmark that you can apply to study the hardness of problems. So how do people study hardness, how do they formalize it in this setting, using these abstract computational devices? Typically, in the formalization of hardness, problems are defined as decision problems: the most developed theory of hardness concerns answering yes/no questions. These are definitely not the only questions you want your computer to solve — there are all sorts of combinatorial problems where you want to find something, to minimize costs, and so on — but the most developed theory of hardness is for yes/no questions. This is usually formalized through the concept of a language, or decision problem. Take again Sigma star, the set of all finite strings. A language is a subset of these strings. The strings that belong to the language, we say, get the answer "yes" to the problem, and the strings that do not belong to the language are the instances that should be answered "no". One language we have seen, for example, is the language of graphs that are Eulerian, or Hamiltonian — that's a yes/no question: some strings encode graphs that are Eulerian, and others do not. Now, the way you define hardness here is by defining complexity classes, which are collections of languages.
One should see a complexity class like this: we give a computational device a finite amount of resources — typically time, but not necessarily — and ask ourselves what that particular device can achieve with those resources. Again, this is an extremely large area of research in theoretical computer science, so I'm not going to go into lots of details, but the most famous class is P, which stands for polynomial time. A language belongs to P if there exist a Turing machine M and a polynomial such that, for all inputs: if the input belongs to the language, the machine enters an accepting state at some point of its computation when given that input; if it does not belong to the language, the machine enters a rejecting state; and the computation always terminates in a number of steps bounded by the polynomial in the length of the input it was given. More complicated problems typically require more time; if the time required grows only polynomially, the language is in P. These are considered to be the easy problems. What are examples? Well, the Eulerian cycle problem, as we said. Or two-coloring of graphs: if I give you a graph and two colors and ask whether you can color the vertices so that no adjacent vertices share a color, this is easy, because you just start at one vertex, give it one color, give its neighbors the other color, and keep alternating. There are any number of problems in P; I won't go into specifics. The second famous class is NP, nondeterministic polynomial time, defined with a machine and two polynomials. The idea is that if the instance belongs to the language, then there exists a certificate witnessing that fact, such that the machine, run on the instance together with the certificate, can figure out that the certificate is correct — so the instance really does belong to the language. If the instance does not belong, then no certificate works.
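The alternating-colors argument for two-coloring can be made concrete with a breadth-first search (a sketch; representing the graph as an adjacency-list dictionary `adj` is my own choice):

```python
from collections import deque

def two_colorable(adj):
    """Greedy BFS 2-coloring: color a start vertex 0, its neighbors 1,
    their neighbors 0, and so on; fail iff some edge joins equal colors.
    `adj` maps each vertex to a list of its neighbors. Linear time."""
    color = {}
    for source in adj:                  # handle disconnected graphs too
        if source in color:
            continue
        color[source] = 0
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False        # an odd cycle blocks 2-coloring
    return True

triangle = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
print(two_colorable(triangle))  # False
```

Because the coloring of each component is forced once the first vertex is colored, this single linear-time pass decides the problem — a typical member of P.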
The certificate needs to be polynomially large in the size of the instance, and M has to terminate in time polynomial in the size of both. In particular, notice that even though this says that verifying that an instance belongs to the language is easy, there is no claim made about how hard it is to find such a certificate — you are only saying that such a certificate must exist. There are all sorts of problems in this complexity class. The Hamiltonian cycle problem is one. There is satisfiability: suppose I give you a formula in Boolean logic — something like (NOT x1 OR x2), et cetera. Is there an assignment of truth values to the Boolean variables that makes the formula true, or is it instead the case that every assignment makes it false? Deciding whether a formula is satisfiable is an NP problem. This is also true if you restrict to k-SAT, which is propositional logic in conjunctive normal form: a k-SAT formula is given by a set of clauses, each clause consisting of exactly k literals, where each literal is either some variable x_i or the negation of some variable x_i. A clause is true if at least one of its literals is true, and you take the conjunction of all the clauses in your formula. We'll get back to k-SAT soon enough. Notice that all problems in P belong to NP, because I can just run the machine that decides the problem in polynomial time and forget about the certificate: whatever certificate I'm given, I ignore it, decide the problem with the machine that's supposed to solve it, and I'm done.
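The verifier view of NP can be illustrated on SAT: given a candidate assignment as the certificate, checking it is a simple polynomial-time scan. The encoding conventions here — signed integers for literals — are my own:

```python
def check_sat_certificate(clauses, assignment):
    """Polynomial-time NP verifier for SAT in CNF. A literal is an int:
    +i means x_i, -i means NOT x_i; `assignment` (the certificate)
    maps each variable index i to True/False. Accept iff every clause
    contains at least one true literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
formula = [[1, -2], [2, 3]]
print(check_sat_certificate(formula, {1: True, 2: False, 3: True}))  # True
```

Verifying takes one pass over the formula; nothing here says how to *find* a satisfying assignment — that is exactly the gap between P and NP.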
Now, for the benefit of time, because I want to move on to something juicier, I'm going to skip the part on NP-completeness in particular. The point is that P is a subset of NP — this much is well known — and the question is whether the other inclusion holds. We don't know: it is a famous open problem in computational complexity. The overwhelming majority of experts believe it does not hold — that P ≠ NP, that there exist problems intrinsically requiring more than polynomially many computation steps to solve — but we have no proof of that, only an overwhelming number of failed attempts to provide efficient algorithms. And I think this is pretty much all I'm going to say about computational complexity, because I only have 25 minutes left and I want to move to something that brings us closer to the meat of the reason you're all here. So this was the setup of classical computability and classical computational complexity. But we want more powerful computers. And a pragmatic approach that people take when building computers, as far as I can tell, is this: if you look at nature, at the universe, and you see some process performing some interesting calculation, you want to bring it into your computer so you can harness its power. The first such resource people figured out, before getting to quantum mechanics, was randomness. Let me start with an example. We go back to the 18th century, to Georges-Louis Leclerc, Comte de Buffon, a French aristocrat, scientist, and polymath, who in his explorations came up with a nice experiment in probability theory. You must imagine a wooden floor with floorboards of width B, and you have needles of length A that you throw at random onto this floor.
The question is: what is the probability that a needle will straddle two boards, instead of lying completely within one? This problem is easy to solve if you use the right coordinates. Since the problem clearly repeats itself in the direction across the boards, we describe the position of a needle using two coordinates: Y, the distance between the lower end of the needle and the closest horizontal line above it, and phi, the smallest counterclockwise angle between the horizontal direction and the direction of the needle. A toss of a needle is then described by a pair (Y, phi), with the two coordinates sampled uniformly — Y in [0, B) and phi in [0, pi). So how do you solve this, geometrically if you like? The probability of a hit — of being in the straddling case and not the other — is proportional to the double integral over Y and phi of the indicator function of a hit. The relevant quantity is A sin(phi), the vertical extent of the needle: if A sin(phi) is larger than Y, you have a hit, and the indicator is one; otherwise it is zero. Graphically, in the (phi, Y) plane, the region below the curve A sin(phi) is the area that gives you the probability of hitting, and the double integral is just the area of that region. Including the normalization constant 1/(B pi), the probability of a hit is

P(hit) = (1 / (B pi)) ∫₀^π A sin(phi) dphi = 2A / (pi B),

an embarrassingly easy integral.
Buffon was happy with this, but then — I think it was Laplace — someone came along and observed that this actually gives us a way to estimate the value of pi empirically. Because you can set, for example, B equal to 2A, and then the probability of a hit is one over pi. You could repeat the experiment a bunch of times, and the observed frequency — the number of hits divided by the number of throws — converges to the probability of hitting in the limit of a large number of throws. So the reciprocal of that ratio converges to pi in that limit. And that's not a trivial thing, specifically at a time when people didn't have excellent ways of approximating pi — at the time that was something people found hard. And this is an empirical way of doing it; it's not a calculation. I would not be able to figure out right away how to define a Turing machine that does that. So one of the ways I can enhance my Turing machine, my computational device, is to give it access to randomness, so that it can perform this kind of experiment and approximate pi. There are different ways of doing this; again, I'm not really keen on deciding exactly how. The simplest way you can imagine is to extend the concept of a Turing machine with an additional register — a random-bit register — and a different transition function, saying that at each step of the computation the machine can ask the universe to generate a random bit there, and the instructions then depend on the values on the tape and on the content of the random bit. So based on the specific value the universe generates, the machine will do different things. This makes the machine non-deterministic, in the sense that different runs of the computation will produce different results, and now you have to study the computation the machine performs as a tree.
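The needle-throwing experiment is easy to simulate once the machine has random bits. A Monte Carlo sketch, using the P(hit) = 2A/(πB) formula with B = 2A so the reciprocal hit frequency tends to π (the parameter choices and names are mine):

```python
import math
import random

def buffon_pi(throws, a=1.0, b=2.0, seed=0):
    """Estimate pi via Buffon's needle. With needle length a and board
    width b = 2a, P(hit) = 2a/(pi*b) = 1/pi, so throws/hits -> pi."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(throws):
        y = rng.uniform(0.0, b)          # lower end's distance to the line above
        phi = rng.uniform(0.0, math.pi)  # needle's angle with the horizontal
        if a * math.sin(phi) > y:        # vertical extent exceeds the gap: a hit
            hits += 1
    return throws / hits if hits else float("inf")

print(buffon_pi(100_000))  # prints an estimate close to pi
```

The estimate converges slowly (the error shrinks like one over the square root of the number of throws), which is typical of Monte Carlo methods.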
The computation starts, then at some point it asks for a random bit, and the computation splits: with probability one half the bit is a zero, with probability one half it is a one. So now I have to follow both cases, and maybe later on each branch splits again, requiring a new random binary value — again one half here and one half there, et cetera. At the end of the day, even if all of these branches halt, so they all actually produce some value, the machine does not simply give you an answer: it gives you a probability distribution over answers. And if you were thinking about accepting or rejecting inputs of a language, it no longer does so deterministically — it accepts or rejects with some probability. You then have to define a different notion of complexity class, for example: given a Turing machine with access to randomness, what are the languages it can accept with high probability? That gives you, for example, BPP — Bounded-error Probabilistic Polynomial time. I'm not going into those details; there are entire courses taught on these topics if you're interested. But I would like to press on this idea and see a few other examples of what randomness can achieve. So far this is interesting, but it's a curiosity — OK, you can approximate pi, nice, but is there anything more? The answer is yes, there is plenty more you can do with randomness, so let's see a few things. The first thing I'd like to discuss is how to approximate k-SAT instances. As I quickly described before, a k-SAT instance is a particular form of propositional Boolean formula, given by a number of clauses — say, M clauses.
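The branching-tree picture can be made concrete by enumerating every possible sequence of random bits and tallying the outputs with their probabilities (a toy sketch; the `machine` here is just any function of the drawn bits, standing in for a randomized computation):

```python
from itertools import product

def output_distribution(machine, n_bits):
    """Walk the whole computation tree: enumerate all 2^n random-bit
    sequences the machine might draw, weight each leaf by 2^-n, and
    collect the resulting distribution over outputs."""
    dist = {}
    for bits in product([0, 1], repeat=n_bits):
        out = machine(bits)
        dist[out] = dist.get(out, 0.0) + 2.0 ** -n_bits
    return dist

# Toy randomized machine: accept iff its two random bits agree.
print(output_distribution(lambda bits: bits[0] == bits[1], 2))
# {True: 0.5, False: 0.5}
```

A class like BPP then asks for machines whose distribution puts, say, probability at least 2/3 on the correct answer for every input.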
And each clause is given by exactly k literals connected by disjunction, so it's L1 or L2 or L3, et cetera, and the different clauses are connected by conjunction. The formula needs all of its clauses to be satisfied in order to be satisfied. And literals, I remind you, are either a variable or the negation of a variable. You might know that there are SAT competitions out there: people try to come up with the fastest algorithm to decide satisfiability, and every year there's some new algorithm that is faster than the previous one; there's an entire conference devoted just to satisfiability. So one might say: since SAT is NP-complete, one of the hardest problems in the class NP, it looks like a very hard problem; it's hard to decide whether a formula is satisfiable or not. But surprisingly, randomized algorithms are quite powerful at approximating such formulas. Approximating in the sense that maybe I won't be able to find the truth assignment that satisfies the formula, or that satisfies the largest possible number of clauses, which would be the best solution, but I can get close enough. If it's satisfiable, maybe I can't find the assignment that satisfies all of its m clauses, because that's NP-complete, but I can satisfy a significant fraction of those clauses. As it turns out, let's write this down as a theorem: given a 3-SAT formula with m clauses, consider the expected number of clauses satisfied by a random assignment, random meaning I just throw Boolean values for the variables at random: I flip coins and assign. What's the value of x1? Zero with probability one half, one with probability one half. What about x2?
Same thing, independently, purely blind. Then the expected number of satisfied clauses is actually seven eighths times m, a seven-eighths fraction. It's a surprisingly large fraction. So where does it come from? For each clause Cj, I define the variable Zj(x), where x is a given truth assignment, to be one if x satisfies Cj, zero otherwise. And I define Z to be the sum of the Zj's. Now the expectation of Z, which is the expected number of satisfied clauses taken over, let's say, the uniform probability distribution on all possible assignments, is, by linearity, the sum of the expectations of the Zj's. And since Zj is binary, the expectation of Zj is just the probability that Cj is satisfied. And once you have that, this is easy to see: the disjunction in a clause is false only if all of its literals are false. Provided the literals involve distinct variables, which you can assume without loss of generality, there is exactly one assignment of the three variables appearing in those literals that falsifies the clause, out of the two to the three, so eight, possible assignments for 3-SAT. So each clause is satisfied with probability seven eighths, and the sum over j of seven eighths equals seven eighths m. And well, I see I have about five minutes left, so I will just say this: this is surprisingly large, but in particular we have proven that there exists at least one truth assignment that satisfies at least this many clauses. That is not obvious to begin with. Why is it the case? It's the usual averaging argument: a random variable needs to take values that are at least as large as its average value.
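The linearity-of-expectation argument can be checked by brute force on a small instance: enumerate all assignments and average the number of satisfied clauses. A sketch follows; the instance below is made up purely for illustration, with a clause written as a triple of literals and a negative sign meaning negation.

```python
from fractions import Fraction
from itertools import product

def satisfied_clauses(clauses, assignment):
    """Count clauses with at least one true literal.
    Literal +i / -i means variable i unnegated / negated (1-indexed)."""
    def lit_true(lit):
        value = assignment[abs(lit) - 1]
        return value if lit > 0 else 1 - value
    return sum(any(lit_true(l) for l in clause) for clause in clauses)

def expected_satisfied(clauses, n_vars):
    """Exact E[Z] under a uniformly random assignment, by full enumeration."""
    total = sum(satisfied_clauses(clauses, a)
                for a in product([0, 1], repeat=n_vars))
    return Fraction(total, 2 ** n_vars)

# A made-up 3-SAT instance: m = 4 clauses over 4 variables,
# each clause on three distinct variables.
clauses = [(1, 2, 3), (-1, 2, 4), (1, -3, -4), (-2, 3, 4)]
print(expected_satisfied(clauses, 4))  # 7/2, i.e. (7/8) * 4 clauses
```

Because each clause mentions three distinct variables, each one is falsified by exactly one of its eight local assignments, so the exact expectation comes out to seven eighths of m, matching the theorem.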
Somewhere, that is, because if all of the values it could take were below the average, then the average would lie outside the range of values the random variable takes. So this means we have proven an existence statement purely through probabilistic arguments. This is a non-constructive proof, and it goes under the name of the probabilistic method. This is interesting by itself, but I want to move on to the very last thing that I have. Probably, as I was expecting, I'm not able to cover everything I prepared in my notes; you'll find more information about examples of probabilistic algorithms in the notes that are going to be published and made available online. But there's one last thing I wanted to introduce that should open the way to quantum mechanics. The idea is this: okay, we've seen that randomness can be useful, and there is a reason why I might want to give my computer access to randomness to increase its computational powers. Now consider this experiment. This is a simplified version of an experiment that was actually performed. Suppose you have a setup like this: you have a laser here, and here you have what, at the beginning of quantum physics, people would call a half-silvered mirror, but which we describe as a beam splitter today. This is an actual mirror; this is again another mirror. And here you have detectors for your photons; these detectors are triggered if they are hit by a photon. So, to do some process tomography of these elements, suppose that my laser emits a photon traveling in this direction. I will call zero, with this particular notation, the state of the system where the photon is going in this direction. You can describe it as a vector like this.
I'm only going to consider the possibilities that the photon goes either in the horizontal direction or in the vertical direction, due to the geometry of this setup. And then I have another vector for the other direction. Now, if you were to apply probability theory here: how does the actual mirror behave? Well, the mirror flips a photon going in this direction into that one, and a photon going in that direction into this one. So if you want to describe it through a stochastic matrix, it should be something that swaps those two states. This one here instead is a half-silvered mirror. What does that mean? It means that half of the time the photon will pass through and half of the time it will be reflected. If I were to put photon detectors here and shine the laser, half of the time this detector would go off and half of the time that detector would go off. So, probabilistically, you would expect this to be described by a matrix with all entries equal to one half: wherever the photon comes from, half of the time it is reflected, half of the time it is transmitted. So it's easy to see what you would expect to happen, just by splitting the possibilities: the photon can go this way or this way, each with probability one half; at the second beam splitter it splits again with probability one half, so each path should give you one fourth. There are four different paths the system can take, all with probability one fourth. At the end of the process, I should expect this detector to go off half of the time and that one half of the time. But this is not at all what you see if you actually do the experiment: this one will go off one hundred percent of the time. So what does that mean?
And this is going to sound outrageous for people who have not seen quantum mechanics before. The claim I'm going to make here is that probability theory is the wrong tool: it's just not the right thing to describe this system. We should use a different kind of theory, where the states are not described by probability vectors; they're described by something more fundamental, which is a vector of probability amplitudes. So your state is going to be (alpha, beta), where alpha and beta are complex numbers. If you haven't seen complex numbers, for this particular setup it's fine to assume they are real numbers; the main point is that they can be negative. The only requirement is that the squared amplitudes should be normalized, summing to one. And I should run through this whole process by studying what these devices do to the amplitudes, and once I arrive at the detectors, I should square the amplitudes of the state to get the probability of this detector going off or that one going off. And soon enough, after Catherine's presentation, things will become much easier to understand. The claim is that if you actually believe quantum mechanics, this beam splitter matrix should have a minus one entry here, and that changes everything. What happens is that the path that goes like this and the path that goes like this give you amplitude one half and amplitude minus one half. Before, we were saying this path has probability one fourth and that path has probability one fourth, and they both arrive here, so you should sum the probabilities. But now, since you no longer have probabilities, you have amplitudes, which can be negative, they can actually destructively interfere and give you probability zero at the end, which you would not expect from classical probability theory.
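The contrast between the two predictions fits in a few lines of linear algebra. In the sketch below, a minimal version of the standard Mach-Zehnder calculation under the assumptions stated above, the mirror swaps the two direction states, the classical beam splitter is the all-one-halves stochastic matrix acting on probabilities, and the quantum beam splitter is the analogous matrix acting on amplitudes, with the minus one entry.

```python
import math

def apply(m, v):
    """Apply a 2x2 matrix m to a length-2 vector v."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

MIRROR = [[0, 1], [1, 0]]  # swaps the two direction states

# Classical model: beam splitter as a stochastic matrix on probability vectors.
BS_CLASSICAL = [[0.5, 0.5], [0.5, 0.5]]
p = apply(BS_CLASSICAL, apply(MIRROR, apply(BS_CLASSICAL, [1.0, 0.0])))
print(p)  # [0.5, 0.5]: each detector fires half the time

# Quantum model: beam splitter as a unitary on amplitude vectors; note the -1.
s = 1 / math.sqrt(2)
BS_QUANTUM = [[s, s], [s, -s]]
amp = apply(BS_QUANTUM, apply(MIRROR, apply(BS_QUANTUM, [1.0, 0.0])))
print([a ** 2 for a in amp])  # ~[1.0, 0.0]: the second arm interferes away
```

Squaring the final amplitudes gives probabilities approximately one and zero: one detector fires every time, matching the experiment, while the stochastic model predicts fifty-fifty.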
And these are phenomena which are purely quantum, and they make up one of the main distinctions between quantum mechanics, which, if you haven't seen it before, is this kind of quirky version of probability theory, and classical probability theory. So here is an example of a process that I cannot explain using a naive modelization through classical probability theory. Going back to where we started, these are the kinds of processes I want to make available to my computational device, because I could extract something from them that classical randomness cannot give me. And with this I end my presentation. This was hopefully a nice introduction to Catherine's lecture, coming right away, I think. Thank you for your attention, and I'm sorry if it was a bit boring for some of you, but I think it was useful to get everyone on the same ground. Hi, I'm Jota. I remind you all that we now have a group photo taking place just outside, at the entrance of ICTP where you took the badge. Go there and wait, and we'll take the photo before restarting for the next lecture.