Hello everyone, nice to meet you, my name is Jean Barbier, I'm a research scientist here at ICTP, so welcome. I'm one of the co-organizers of this nice event. A hackathon is really a novelty at ICTP: we're used to organizing many scientific activities, and I'm sure that many of you came for some other ones, but this is really a new format for us. It's a new experience, so we hope it will be as smooth as possible. If you have any feedback or any questions, anything you want to know along the week, please come to us. I will now leave the floor to my friend and colleague Alexandre, who is also a co-organizer, from the quantum company with which we are organizing this whole event. So welcome again. Hi everyone, I'm Alexandre, one of the co-organizers. I'm representing the quantum side: it's a large quantum computing organization, we're across five or six countries now, Japan, the US, Germany, the UK, and soon other countries. So this week is, I would say, the largest event we've ever done, with people from 23 countries. It's our way to participate in our global education program, because we think quantum is going to be a really big thing in the coming decades, so we want to teach it as much as we can to as many people as we can. If you have any questions this week, also don't hesitate to come to me. I think I added most of you on Slack yesterday; if you didn't get access to Slack, come to me and I'll do anything to help you. Thanks a lot. Now I will leave the floor to Rosario Fazio, who's the head of Condensed Matter Physics at ICTP. Very much welcome also from me, and let me spend a few words about ICTP for those who visit us for the first time.
ICTP was founded more than 50 years ago by the Nobel Prize winner Abdus Salam, as it says in the name of the center, with the idea and the mission to foster scientific exchange among researchers from all over the world, from emerging countries and from more wealthy countries, without borders related to politics or religion or anything else. Actually, in the past ICTP was, and it still is, a place where people from all over the world meet. We have a very strong program of visiting scientists every year. We have several activities like the one you are going to participate in this week, and I strongly encourage you to visit our website to see if there are other workshops or conferences or schools you might be interested in participating in. ICTP has a strong interest in quantum technologies. There are several of us working actively in this field: me, Antonello, Marcello Dalmonte, and other people throughout all the sections of ICTP. As you probably know, we have five research sections: in addition to condensed matter, which was already mentioned, there is QLS, there is math, there is high energy, and there is Earth science. Quantum information themes and problems pervade several sections. In addition, and I think Antonello will say a little bit more on this, in the whole area of Trieste there is a large interest in quantum information and quantum technologies: besides ICTP, there are a lot of groups at the University of Trieste and at SISSA, which is a Ph.D. school here in Trieste. So a few years ago we decided to gather forces and founded an institute which is a kind of umbrella to coordinate and stimulate activities in quantum information processing. The institute is called TQT, which stands for Theoretical Quantum Technologies. Essentially most of our activities, I'm referring to the Trieste area, are of a theoretical nature. Of course there are a lot of experiments, but in this field, up to a few years ago, it was mostly theoretical.
So we decided to concentrate on this, and I guess Antonello will say a few words about TQT. Okay, thanks a lot and welcome. Hello everybody, my name is Antonello Scardicchio, I'm also a researcher here at ICTP, and I promise I will be the last speaker before the lecture starts, so I won't take much of your time. I think you really want to jump into this fascinating topic now. So, just a few words about the activities that we at ICTP, and in the Trieste area, have on the general topic that you're going to see this week, which is quantum technologies, quantum information, quantum computation. As Saro correctly said before, this was a very much theoretical topic until really, literally, like two years ago. So you guys are now privileged, at least from my point of view, to start your career in this topic having something to put your hands on. Because for us it was like: let's take the Schrödinger equation with the Hamiltonian, let's evolve it in time t, let's see what's going on. That was more or less it, right? Now you're really going to do some programming; there is a hackathon. This is very nice, and I hope you appreciate how privileged you are. So, the various activities we have been doing at ICTP. First of all, this is a place where we do research, and we do science dissemination and teaching at the doctoral level. We organize schools; we had one last fall on quantum, called From Electrons to Qubits, so you can imagine there is everything, more or less, in between. And we have been working on several things. One of the things that I like to do most is studying the implications of complexity and the theory of disordered systems for quantum mechanics and quantum computing. There are people, as Saro mentioned, like Marcello Dalmonte, for example, who are more interested in things like quantum simulators.
They use cold atoms or other technologies to simulate quantum mechanics, to control these kinds of things. Saro himself is an expert on quantum information: the study of entanglement, how to use the quantum properties of matter to transfer and process information. Processing of information is computation, okay? So, Gianni was a student here and is now a researcher at NASA working on these topics, and I'm very much looking forward to his introduction to quantum computing. So I will just leave the floor for two seconds to Gianni. Oh, not even that? Okay, let's skip the two seconds. So I introduce directly Gianni Mossi, who flew all the way over from California to teach us about introduction to quantum computing. This will not be the only lecture introducing quantum computing, which doesn't mean that you don't have to pay attention; you have to pay more attention, because it's really the foundations. And of course, one of the good things about ICTP and having these extended schools is that the speakers are always around. So if you don't understand something and you have questions, you can just go and bother him all the time: coffee breaks, lunchtime. These were the things that I appreciated most when I was a student at schools. So please do that, bother the speakers until you have understood what you want to understand, or until you have realized you will never understand it. Okay, so thank you very much and enjoy the activity. Thank you. Yeah. So welcome. My name is Gianni. Thank you, Antonello, very nice introduction. I'm in a strange position now, because I feel like this kind of John the Baptist figure, in the sense that my presentation is going to be the only one where quantum physics is hardly going to enter into it.
As part of the introduction to quantum computing, the organizers decided that, considering the different backgrounds and the different career stages of the participants, it would be useful to have an introduction starting from classical computing, figuring out how come we eventually ended up discussing quantum computing. So for some of you, some of the things that I'm going to talk about are going to be old concepts. Hopefully you won't have seen every single thing that I'm going to present, but this is my contribution to the hackathon: preparing the way for the second speaker to discuss quantum computation models in more detail. So, without further ado, the story that I want to tell you is a story with theorems; that is the way that I would like to present it. The story starts in 18th-century Königsberg, a city at the time in Prussia. Legend has it that at some point one of the pastimes of the people was figuring out whether it was possible to take a stroll through the city crossing each and every one of the seven bridges that straddled the river passing through the town. This is what Königsberg looked like at the time. Why do we know these kinds of things? Why do we still talk about it? Because this particular problem, finding a path that crosses all of the bridges of Königsberg, was the example that spurred Leonhard Euler to write one of his most well-known papers. The problem is about what we would now call Eulerian graphs. So the Eulerian graph problem is: I give you a graph, and I want you to find a cycle that passes through all of the edges of your graph. In general, if you were to try to solve something like this, naively you would think that you would have to do some sort of tree search.
You start somewhere, you try to cross one of the edges, one of the bridges, and then you try to follow through until you get stuck, and then maybe you need to backtrack. This is the naive approach one would take in solving these kinds of problems. But Euler being Euler figured out that, well, for Königsberg this might be doable, because there are not that many bridges and not that many pieces of land. But a larger city, which by some accounts has 121 islands connected by 435 bridges, seems way less doable than this. So what Leonhard Euler figured out is that, first of all, obviously you can simplify the map into a graph, right? This is the same thing, just simplified. In what we now call an Eulerian cycle, every time you visit a node, you also need to leave it. So if you have an Eulerian cycle, all the nodes need to have an even degree, an even number of edges incident to that particular vertex. And, well, maybe this is something we can actually prove, because what people usually state is that this is a necessary and sufficient condition for being an Eulerian graph. But what Euler actually proves is only one side of the claim; the fact that people maybe don't particularly know is that the other implication was also proven, not by Euler though, but by some other person nobody remembers. So we will not discuss it at this time. The main concept here is that now I have a particular property, having all vertices of even degree, that allows me to decide whether an Eulerian cycle exists in the graph. This is way simpler to check than the kind of tree search that I was doing naively at the beginning. But without Euler, one would be stuck with that technique.
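Euler's criterion is easy to turn into a few lines of code. Here is a minimal sketch in Python (the function name and the edge-list representation are my own; the degree test also tacitly assumes the graph is connected, which the check below does not verify on its own):

```python
from collections import Counter

def has_eulerian_cycle(edges):
    """Euler's condition: a connected multigraph has an Eulerian
    cycle if and only if every vertex has even degree."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return all(d % 2 == 0 for d in degree.values())

# Königsberg: 4 land masses A, B, C, D joined by 7 bridges.
koenigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]
print(has_eulerian_cycle(koenigsberg))  # False: every land mass has odd degree
```

Counting degrees is linear in the number of edges, which is exactly why this beats the naive tree search.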
Contrast that with 1859, when Sir William Rowan Hamilton, who was the Royal Astronomer of Ireland and a Professor of Astronomy at Trinity College Dublin, decided to start selling a toy called the Icosian game, which I'm going to briefly describe to you. You must imagine a wooden board with grooves, and with holes at the vertices that you can put pegs in; you have numbered pegs. The idea is now to define a path through this graph that, instead of passing through all of the edges, visits each of the vertices exactly once. Well, the toy was a commercial failure, so the guy kept his day job. But the main point here is that even though the problem seems to be very similar to the Eulerian one, as of today we do not have an efficient way of deciding whether or not a graph has a Hamiltonian path, whereas in the Eulerian case we do have one, thanks to Euler. You might be disappointed in me saying this. Here, okay, it's clear that I have a simple property to test; there, we just don't know, we just never found such a property. Maybe in the second part of the lecture we'll come back to this and show more evidence, but at this time, yes, even though a large number of people have tried to find some equivalent property or equivalent method to solve these kinds of problems, it seems that the hardness of these two problems is intrinsically different. It's not a matter of effort; there seems to be a mathematical structure behind this that makes one easy and the other hard. And this is the first example we encounter of something that is common in computational complexity, where proving lower bounds on the hardness of a problem is more complicated than proving upper bounds.
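For the Hamiltonian problem, the best we can do naively is exactly the kind of tree search that Euler let us avoid in the Eulerian case. A sketch of that backtracking search (names and representation are my own; worst-case exponential time):

```python
def has_hamiltonian_cycle(n, edges):
    """Naive backtracking: extend a path that visits each of the
    n vertices exactly once, then try to close it back to the
    start. No Euler-style degree shortcut is known for this."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def extend(path, visited):
        if len(path) == n:
            return path[0] in adj[path[-1]]  # can we close the cycle?
        return any(extend(path + [w], visited | {w})
                   for w in adj[path[-1]] if w not in visited)

    return n == 0 or extend([0], {0})

print(has_hamiltonian_cycle(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True: the square
```

Fixing vertex 0 as the start loses nothing, since any Hamiltonian cycle must pass through it; the exponential blow-up comes from the branching over unvisited neighbours.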
Because to prove that something is not harder than some amount x of resources, you just need to exhibit a method that solves the problem using that amount of resources. But proving that a problem is harder than something would require you to show that all the methods available to solve it actually use more resources than x. So there seem to be different classes of intrinsic hardness for the problems that people encounter in normal combinatorial settings. For the next chapter, in order to formalize this property a bit better, we need to move to the early 20th century, to 1928, when David Hilbert posed a question which is now known as the Entscheidungsproblem: whether or not there is an effective method which can decide whether a formula is a theorem of some axiomatic system. A few years before, the completeness theorem had been proven; if you're familiar with it, fine, otherwise think of first-order logic, which was proven to be sound and complete, meaning that the logical calculus David Hilbert had developed was sufficient to prove all and only the logical truths of this particular axiomatic system. But take a generic mathematical statement: the question is, if I give you a particular formula, is there a way to tell whether that's a theorem or not? And one of the problems in answering this kind of question was that there was no real concept of what an effective method would be. There were intuitive proposals: for example, Alonzo Church had developed the lambda calculus, which is the basis of functional programming languages today, and Gödel had developed the theory of recursive functions. But they all suffered from a basic question: okay, those functions might be intuitively computable, but why would I believe that those families of functions would be all of the computable functions?
From our perspective, the solution to this problem was given by Alan Turing, in a famous 1936 paper where he introduced the concept of a Turing machine, which I will briefly describe now. Formally, you can see a Turing machine as a triple of objects: you have a Σ, which is an alphabet, a Q, which is a finite set of states, and you have a transition function. The idea is that the machine has access to a tape, divided into squares. Each of the squares can contain one element of the alphabet, a symbol from the alphabet, and at each time the machine has a pointer, or head, which can see only one of the squares on the tape. A set of instructions tells you: if you are seeing a particular symbol s and you are in the state q, then print a symbol s′, change your state to a new state in Q, and either move to the left, move to the right, or stay. One can imagine the tape to be, for example, infinitely long to the right, even though that's a choice you can make. A configuration of the machine should describe the contents of the tape: you can assign positions to the squares, and the configuration should specify the contents of the tape at each position. One of the symbols of the alphabet we take to indicate a blank, meaning an empty square. The configuration should also specify a state q and the position of the head. Now, how would a machine compute something? Well, you would have to give it an input, so you would have to specify the contents of the first N squares. The machine starts in an initial configuration, meaning, as I said, the first few squares contain the symbols of the input, the initial state is a specific state we decide to be the initial one, and the head starts at position zero. From an initial configuration, we can move to a successive configuration following the instruction given by the transition function.
So if some φ is the configuration at time t, then the configuration at time t + 1 is given by the following: away from the position of the head, the tape does not change; at the position where the head is looking, you write what the instruction tells you to write; and as for the other two elements, the new state q′ is the second element of the instruction, and the head position i′ gets updated based on the third element, so plus or minus one depending on whether you go left or right, or the same if you stay. These two pieces of information should convince you that if I know my initial configuration and I know the recipe for producing a successive configuration from a previous one, then I can define a sequence of configurations that describes the computation of my machine. In particular, since a computation at the end of the day should produce some final result, I will declare some of the states to be accepting, which will be useful later, with the idea that once the machine enters an accepting state or a rejecting state, the computation stops. If the state is accepting, I can read the non-blank part of the tape as the output of the machine on that particular input. So, using this recipe, we can define computable functions, in the sense of functions from Σ* to Σ*; typically these are partial functions, so they might not be defined on all of the domain but only on a subset. Σ*, sigma star, is the set of all finite strings that I can build using elements of my alphabet. A function like this is computable if there exists a Turing machine, say M_f, such that, one, the computation on input x halts, and two, the output M_f(x), meaning the output of M_f on input x, is f(x). Now, from counting arguments, one should immediately figure out that most of these functions are non-computable. Why?
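The update rule just described can be sketched as a small simulator. Everything concrete here is my own toy convention, not the lecture's: the rule format, the halting-state names "accept" and "reject", and the blank symbol "_":

```python
def run_turing_machine(delta, tape, state="q0", max_steps=10_000):
    """Simulate a one-tape Turing machine. `delta` maps
    (state, symbol) -> (new_state, new_symbol, move), with move in
    {-1, 0, +1}; '_' is the blank. The machine halts on the states
    'accept'/'reject' or when no rule applies."""
    tape = dict(enumerate(tape))  # sparse tape; missing squares are blank
    head = 0
    for _ in range(max_steps):
        if state in ("accept", "reject") or (state, tape.get(head, "_")) not in delta:
            break
        state, tape[head], move = delta[(state, tape.get(head, "_"))]
        head += move
    lo, hi = min(tape, default=0), max(tape, default=0)
    out = "".join(tape.get(i, "_") for i in range(lo, hi + 1))
    return state, out.strip("_")

# Toy machine: walk right over a block of 1s, then accept.
delta = {("q0", "1"): ("q0", "1", +1),
         ("q0", "_"): ("accept", "_", 0)}
print(run_turing_machine(delta, "111"))  # ('accept', '111')
```

The `max_steps` cutoff is there because, as we are about to see, there is no general way to know in advance whether a machine halts.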
Because these machines are finite objects, and I can define specific representations for them using simple strings over a finite alphabet. So the main thing that I want to do here is to describe a machine like that using a finite string over a given alphabet, so that in particular I can feed the source code of one machine into a different machine. There are multiple ways to do this, but here is a simple one that comes to mind; the main point is to encode the transition function. You give a standard enumeration of the states, say q1 through qn, and a standard enumeration of your alphabet, s1 through sm. Then each single application of the transition function to a specific argument you can see as an instruction, saying, for example: if I am seeing the symbol s_a and I am in the state q_b, then write s_c, go into the state q_d, and make some movement. For the indices you can use bars as tallies: a bars means the a-th symbol, b bars means the b-th state, c bars means the c-th symbol, and d bars the d-th state. And finally, I can say that one bar means going left, two bars mean going right, and three bars mean staying fixed. So the transition function is just a finite number of expressions like this. I can use a separator to mark off each part, symbol, state, symbol, state, movement, and the description of a Turing machine becomes simply a sequence of expressions like this.
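As a toy illustration of the tally idea just described, here is a compact variant of my own, with states and symbols given as integer indices; it is not the exact scheme from the lecture, only a demonstration that a transition function flattens into one finite string:

```python
def encode_machine(delta):
    """Encode a transition function as one string using tallies:
    the i-th state is 'q' followed by i bars, the j-th symbol is
    's' followed by j bars, and the moves left/right/stay are
    1/2/3 bars. Instructions are separated by ';'."""
    move_code = {-1: "|", +1: "||", 0: "|||"}
    rules = []
    for (q, s), (q2, s2, m) in sorted(delta.items()):
        rules.append("q" + "|" * q + "s" + "|" * s +
                     "q" + "|" * q2 + "s" + "|" * s2 + move_code[m])
    return ";".join(rules)

# A two-rule machine: in state 0 reading symbol 1, move right;
# reading symbol 0 (the blank), go to state 1 and stay.
delta = {(0, 1): (0, 1, +1), (0, 0): (1, 0, 0)}
print(encode_machine(delta))  # qsq|s|||;qs|qs|||
```

The resulting string is itself a word over a finite alphabet, so it can be written on the tape of another machine, which is the point of the construction.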
There are nicer ways of encoding machines into strings, but it should hopefully be clear that you can do something like this; it is easy. And you should know that if you have a finite set of symbols, then the set of finite strings you can make out of it is countable, whereas the set of functions there has the cardinality of the continuum. So, clearly, you should expect that most of these functions are not computable. But it would be somewhat harder to come up with an explicit example of such a non-computable function. This is something that Turing also proved in his original paper, and it is a well-known example: the halting problem. So, let me very quickly go through it. By this symbol I mean the representation of the machine M as a string. This function takes two arguments, and it outputs one if the machine M on input x eventually reaches an accepting or rejecting state, or at any rate eventually halts; and it outputs zero if the machine runs forever, either because it is stuck in a loop or just because the instructions are such that it never enters these kinds of accepting, halting states. The claim is that this function is not Turing-computable, not computable by any machine, and the proof is well known, but let me quickly review it. We prove it by contradiction. Assume that a machine H exists which computes this function; this function has two inputs. The idea is that you want to build a new machine, say X, which first simulates H on the pair (x, x), so it diagonalizes the input, and then acts based on the output of this calculation. So the machine X takes its input and applies H to it in both arguments, and then lets it run: it is trying to see whether the machine given by the string x halts or not on its own string, on its own description.
And you see, the idea is that this H cannot exist, because you can ask yourself: what happens if I were to run the machine X on its own description? If you assume that it stops, then by the definition there, it means that it runs forever. But if I assume that it runs forever, then by construction H outputs one, which means it stops. So this is obviously contradictory, and by contradiction this H cannot exist. So f is non-computable: the halting problem is non-computable, and this is the proof of that statement. Let me finish here. What we got is that Alan Turing defined this benchmark for what an effective method is, and people have accepted it so far; actually, if you go through the literature, you will find that the previously proposed lambda calculus, the recursive functions, and the Turing-computable functions all identify the same set of functions. So people believe that this is the properly defined notion of an algorithm or mechanical effective method, and this is encapsulated in the thesis that every intuitively computable function is Turing-computable. This is a thesis, the Church-Turing thesis; it is not a theorem, not a proposition, not something you can prove. It is just a consensus on the intuitive concept of an algorithm or effective method: a method that can be carried out without any Euler having to come up with some brilliant idea. It is completely mechanical; it is the basis of the modern-day computers that we have.
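The diagonal construction in this proof can also be phrased in program form. This is only an illustration with hypothetical names, not a real decider: given any concrete candidate `halts(source, input)`, the machine X built from it is guaranteed to contradict it on some input.

```python
def make_diagonal(halts):
    """Given a purported halting decider halts(source, input) -> bool,
    build the troublemaker machine X from the proof: on input s,
    X loops forever exactly when the decider claims that s halts
    on its own description, and halts immediately otherwise."""
    def X(s):
        if halts(s, s):       # decider says: "s halts on s"...
            while True:       # ...so X runs forever instead
                pass
        return "halted"       # otherwise X halts immediately
    return X

# Any concrete candidate decider is refuted somewhere. For example,
# a decider that always answers "runs forever" is wrong about X itself:
X = make_diagonal(lambda src, inp: False)
print(X("<X>"))  # prints "halted", contradicting the decider's answer
```

The string `"<X>"` stands in for X's own description; the point is only that whatever `halts` answers about that pair, X does the opposite.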
And on top of having a useful definition of effective method, now we also have a way of benchmarking the hardness of solving problems, because we can use the runtime of this benchmark machine, meaning the number of steps it has to go through before reaching the end of its calculation, as a measure of the resources needed to solve problems like the Eulerian problem or the Hamiltonian problem we saw before. The running time of such a machine, which is intuitively the number of configurations in the calculation, is now something we can use as a universal benchmark for deciding the hardness of computational problems in general. Let me quickly go through how people have done this so far, and then we will accelerate toward less standard material. Through the representation which I described before: since the behavior of a machine is given by its set of instructions, if I can write down the instructions and send them as input to a different machine, I can use this second machine to scan the instructions and simulate the first machine. In general, you can take a program, write it down, and feed it to another program, because a program is just a string that describes some sort of abstract machine working in this way or some similar, equivalent way. So I can consider a machine at the same time as this kind of structure here, or as the description of a machine, and by using the description I can feed a machine, or rather the instructions that describe a machine, to a different machine. So with this diagonal construction, I eventually get to a contradiction by imagining that there is a machine that does this. Clear? What do you mean, it outputs one or zero? May I interrupt you?
I think there's a misunderstanding here, in the sense that in the construction we are assuming that the input is always finite, so you cannot feed an infinite input to the machine; that's by construction. The tape is infinite in the sense that it has blanks all the way to the right, but that's not considered to be part of the input for the machine. The input needs to be finite, and so I would say the argument does not apply here. So, I see that we're running a bit late, but if you come from computer science, I'm sure you've done your fair share of exercises on Turing machines and such; they're not the most exciting things, in my opinion. So now we're at the point where we finally have a proposed benchmark that is universal, to study the hardness of problems. So how do people study the hardness of problems? How do you formalize hardness in this setting, using these abstract computational devices? Well, usually you will find that problems are typically defined as decision problems, meaning that the most developed theory of hardness is the hardness of answering yes/no questions. These are definitely not the only questions you want your computer to solve; you have all sorts of combinatorial problems, you want to minimize costs and all sorts of things, but the most developed theory of hardness is for yes/no questions. So typically this is defined by giving the concept of a language, or decision problem. I have an alphabet, and again the set of all finite strings over it. A language is a subset of these strings: for the strings that belong to the language, we say the answer to the problem is yes, and the elements that do not belong to the language are the kinds of instances for which we should answer no. One language we have seen, for example, is the language of deciding whether or not a graph is Eulerian, or Hamiltonian: that's a yes/no question, and some strings will codify graphs that are Eulerian and others will codify graphs that are not. Now, the way you define hardness here is by defining complexity classes, which are collections of languages. One must see a complexity class as follows: we give a computational device a finite amount of resources, typically time but not necessarily, and we ask ourselves what that particular computational device can achieve given that kind of resources. Again, this is an extremely large area of research in theoretical computer science, so I'm not going to go into lots of details, but basically the most famous class is P. P stands for polynomial time, and a language belongs to P if there exist a Turing machine M and a polynomial such that, for all inputs: if the input belongs to the language, then the machine enters an accepting state at some point in its computation when given that input; if it does not belong to the language, then the machine enters the rejecting state; and the computation always terminates in a number of steps bounded by the polynomial in the length of the input it is given. More complicated problems typically require more time; if the time required grows only polynomially, then the language is in P. These are considered to be the easy problems. What are examples of such problems? The Eulerian cycle problem, as we said. Two-colouring of graphs: if I give you a graph and two colours, and I ask you whether or not you can colour the vertices of the graph so that no adjacent vertices have the same colour, this is easy, because you just start on one vertex and then colour the neighbours alternately with the other colour. There are a number of problems in P. The next class is NP, nondeterministic polynomial time, for which you need a machine M and two polynomials. The idea is that if the instance belongs to the language, then there is a certificate that certifies the fact that it belongs, so that the machine, run on input
the instance itself and the certificate can figure out that this is a correct certificate so this actually belongs to the language it doesn't then no certificate works and the certificate needs to be polynomially large in the size of the input of the instance and also M has to terminate in time polynomial in the size of both of these things so in particular notice here that there is no claim made even though this is saying that certifying the fact that some instance belongs to the language is easy there's no claim on how hard it is to find such a certificate you're only saying there's such a certificate must exist so there's also problems that belong to these complexity classes Hamiltonian cycle is one there's a satisfiability which is basically suppose that I give you a formula in Boolean logic which is I don't know not x1 or x2 etc etc is there an assignment of truth values to these Boolean variables here that makes this formula true or if instead it is the case that any assignment of truth values will make this formula false deciding whether a formula is satisfiable or not is an NP problem this is also true if you only stick to KSAT which is basically this kind of propositional logic sentence in conjunctive normal form meaning KSAT is given by a set of clauses and each of the clauses is given by exactly K literals and then you have various of these and each literal is either some variable i or the negation variable i and this is meant to be interpreted as all of all of these so this clause here is true if at least one of these literals is true and you're meant to take a conjunction of all the clauses in your formula so this is KSAT we'll get back to it soon enough notice all problems in P belong to this class here because I can just I can just run the machine that decides the problem in poly time and forget about the certificate whatever certificate I'm given I don't care about it I decide the problem with the machine that's supposed to solve it and then I'm 
Now, for the benefit of time, because I want to move on to something juicier, I'm going to skip the part on NP-completeness. In particular, the idea is that P is a subset of NP, and this is well known; the question is whether the other direction holds, and we don't know. This is a well-known open problem in computational complexity. The overwhelming majority of experts believe it does not hold, so that P and NP are actually distinct: there would exist problems that intrinsically require more than polynomially many steps of computation to solve. But we don't have any proof of that, only an overwhelming number of attempts that have failed to provide such efficient algorithms. This is pretty much all I'm going to say about computational complexity, because I only have 25 minutes left and I want to move to something that brings us closer to the meat of the reason why you are here. What I described is the setup of classical computational complexity, but we want more powerful computers, and a pragmatic approach that people take when building computers, as far as I can tell, is this: if you see in nature, in the universe, some process that is performing some interesting calculation, you want to provide it to your computer so it can harness its power, in a sense. And the first such resource that people figured out, before going to quantum mechanics, was randomness. Let me give an example. We go back to the 18th century: Georges-Louis Leclerc, Comte de Buffon, a French aristocrat, scientist and polymath, came up with this nice experiment in probability theory. You must imagine that you have a wooden floor with floor boards of width, say, B, and you have needles of length A, which you throw at random on the floor, and the question is: what is the probability
that the needle will straddle two boards, like this, instead of being completely contained in just one, like this one here. This problem is easy to solve if you use the correct coordinates. First of all, since the problem quite clearly repeats itself periodically in this direction, we will describe the position of a needle using two coordinates: Y, the distance between the lower end of the needle and the closest horizontal line above it, and Phi, the smallest counterclockwise angle between the horizontal direction and the direction of the needle. So a toss of a needle is described by a pair (Y, Phi), with Y sampled uniformly in [0, B] and Phi sampled uniformly in [0, Pi]. How do you solve this problem, even just geometrically if you want? The probability of a hit, of being in this case and not in that one, is proportional to the double integral over Y from 0 to B and over Phi from 0 to Pi of the indicator function of a hit. This indicator is 1 if Y is smaller than the quantity A sin Phi, because then the needle crosses the line; otherwise it is 0 and you are in the other situation. Graphically, with Phi on one axis and Y on the other, the area under the curve A sin Phi is what gives you the probability of hitting, and the proportionality constant is 1 over (B Pi), so the probability of hitting is 1/(B Pi) times the integral of A sin Phi over Phi from 0 to Pi. This is an embarrassingly easy integral, which you can compute to be 2A/(Pi B). Buffon was happy with this, but then Laplace came along and said: actually, this gives us a way to estimate the value of Pi empirically, because you can set, for example, B equal to 2A, and then the probability becomes 1 over Pi, and then you can repeat this experiment
here a bunch of times, and the observed frequency, the number of hits divided by the number of throws, will converge to the probability of hitting in the limit of a large number of throws, so the reciprocal of that ratio will converge to Pi in that limit. That's not obvious, and it's not something immediately easy to do otherwise, especially at a time when people did not have excellent ways of approximating Pi; this is an empirical procedure, not a calculation, and I would not be able to figure out right away how to define a Turing machine that does it. So one of the ways I have to actually enhance my Turing machine is to give it access to randomness, so that it can perform this kind of experiment and approximate Pi. There are different ways of doing it, and again I'm not really keen on deciding how you want to do it, but the simplest you can imagine is extending the Turing machine with an additional random register and a modified transition function, saying that at each step of the computation the machine can ask the universe to generate a random bit, and the next instruction will depend on the values on the tape and on the content of the random bit. Based on the specific value that the universe generates, the machine will perform different things, and this makes the machine non-deterministic, in the sense that different realizations of the computation will produce different results. Now you have to study the computation that the machine performs as a tree: it starts, at some point it asks for a random bit, and the computation splits, say with probability one half the bit is a zero and with probability one half it is a one; now I have to follow both cases, and maybe later on the computation splits again when it requires a new random binary value, again one half here and one half there, et cetera.
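Going back to Buffon's needle for a moment, the Laplace estimate is easy to simulate; this is my own sketch, with the board width set to B = 2A so that the hit probability is 1/Pi:

```python
import math
import random

def buffon_pi(throws, seed=0):
    """Estimate Pi with Buffon's needle, board width B = 2A.

    Each toss samples y uniform in [0, B) and phi uniform in [0, Pi);
    a hit occurs when y < A * sin(phi), so the hit probability is
    2A / (Pi * B) = 1 / Pi, and throws / hits estimates Pi.
    """
    rng = random.Random(seed)
    A, B = 1.0, 2.0
    hits = 0
    for _ in range(throws):
        y = rng.uniform(0.0, B)
        phi = rng.uniform(0.0, math.pi)
        if y < A * math.sin(phi):
            hits += 1
    return throws / hits

print(buffon_pi(1_000_000))  # typically within ~0.01 of math.pi
```

The convergence is slow (the statistical error shrinks like one over the square root of the number of throws), but it is exactly the kind of computation a deterministic machine cannot perform without a source of randomness.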
At the end of the day, even if all of these branches halt and actually give some value, the machine does not simply give you an answer: it provides you with an answer with a probability distribution. And if you were thinking about accepting or rejecting a language, it no longer does that deterministically; it will accept or reject with some probability. Then you have to define a different concept of complexity class, for example by saying: having given my Turing machine access to randomness, what are the languages it can accept with high probability? That gives you, for example, BPP, which is bounded-error probabilistic polynomial time. I'm not really going into those details, because again there are entire courses taught on any of these topics if you're interested, but I would like to press on this idea and see a few other examples of what randomness can achieve. What we have seen looks interesting, but it's a curiosity so far: okay, you can approximate Pi, nice, but is there anything more? The answer is yes, there's plenty more you can do with randomness, so let's see a few things. The first thing I would like to discuss is how to approximate KSAT instances. Remember, as I quickly described before, a KSAT instance is a particular form of propositional Boolean formula given by a number of clauses, say M clauses, each consisting of exactly K literals that you should imagine being connected by disjunction, so a clause is L1 or L2 or L3, et cetera, and the different clauses are connected by conjunctions, so the formula needs all of its clauses to be satisfied in order for it to be satisfied. Literals, I remind you, are either a variable or the negation of a variable. You might know that there are SAT competitions, where people try to come up with the fastest algorithms to
decide satisfiability, and every year there is something new, some new algorithm faster than the previous one; there is a whole conference just on satisfiability. One might say: since SAT is NP-complete, so one of the hardest problems in the class NP, it looks like a very hard problem; it's hard to decide whether a formula is satisfiable or not. But surprisingly, randomized algorithms are quite powerful at approximating such formulas. Approximating in the sense that maybe I won't be able to find the truth assignment that satisfies the formula, or that satisfies the largest possible number of clauses, which would be the best solution I could have, but I can get close enough: if it is satisfiable, maybe I cannot find the assignment that satisfies all of its M clauses, because that's NP-complete, it's a mess, but I can satisfy a significant fraction of those clauses. As it turns out, let's write this down as a theorem: given a 3SAT formula with M clauses, the expected number of clauses satisfied by a random assignment, random meaning I just throw Boolean values at the variables at random, I flip coins and assign each variable zero with probability one half and one with probability one half, independently, purely blind, is 7/8 times M, so a 7/8 fraction, which is surprisingly large. Where does it come from? For each clause C_j I define the variable Z_j(X), where X is a given assignment, to be 1 if X satisfies C_j and 0 otherwise, and I define Z to be the sum of the Z_j's. Now the expectation of Z, which is the expected number of satisfied clauses taken over the uniform probability over all possible assignments, is, by linearity, the sum over j of the expectations of the Z_j's, and each of these expectations is,
due to the binary nature of Z_j, equal to the probability that C_j is satisfied. Once you have that, the rest is easy to see: the disjunction in a clause is false only if all of its literals are false, so, provided the literals are distinct, which you can assume without loss of generality, there is one single assignment of the three variables appearing in those literals that falsifies the clause, out of the 2^3 = 8 possible assignments for 3SAT. So the probability that C_j is satisfied is 7/8, and the sum over j is 7/8 times M. I see I have about 5 minutes left, so I will just say that this is surprisingly large, and in particular we have proven that there exists at least one truth assignment satisfying this many clauses. It's not obvious to begin with why this is the case; it's the usual averaging argument: a random variable needs to take values at least as large as its average sometimes, because if all the values it could take were below the average, then the average would lie outside the set of possible values of that random variable. This means we have proven an existence statement purely through probabilistic arguments; it is a non-constructive proof, and this goes under the name of the probabilistic method. This is interesting by itself, but I want to move on to the very last thing I have. Probably, as I was expecting, I am not able to cover everything I prepared; you will find more information about examples of probabilistic algorithms in the notes, which are going to be made available online. There is one last thing I wanted to introduce, which should open the way to quantum mechanics: the idea is that randomness can be useful, and there is a reason why one might want to give a computer access to it.
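The 7/8 theorem can be checked directly on a small formula; this is an illustration of mine, reusing the integer-literal encoding from before and averaging exactly over all assignments rather than sampling:

```python
from itertools import product

def satisfied_clauses(clauses, assignment):
    """Count satisfied clauses; literal k means variable k, -k its negation."""
    def true_lit(lit):
        v = assignment[abs(lit)]
        return v if lit > 0 else not v
    return sum(any(true_lit(l) for l in clause) for clause in clauses)

def exact_expectation(clauses, n_vars):
    """Exact average of satisfied clauses over all 2^n assignments."""
    total = 0
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        total += satisfied_clauses(clauses, assignment)
    return total / 2 ** n_vars

# Three 3-clauses with distinct literals: by linearity the expectation
# must be exactly 7/8 per clause, regardless of how the clauses overlap.
formula = [[1, -2, 3], [-1, 4, -5], [2, -3, 5]]
print(exact_expectation(formula, 5))  # 2.625 == 7/8 * 3
```

Note that linearity of expectation makes the answer exact even though the clauses share variables and are therefore correlated.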
Now consider this experiment, a simplified version of an experiment that was actually performed. Suppose you have a setup like this: you have a laser here, and here you have what, at the beginning of quantum physics, people would have called a half-silvered mirror, but which we describe today as a beam splitter; this is an actual mirror, this is another mirror, and here you have detectors for your photons, which are triggered when they are hit by a photon. Suppose my laser emits a photon travelling in this direction; I will call the state of the system in this direction, with this particular notation, zero, and you can describe it as a vector like this. I am only going to consider the possibilities that the photon goes in the horizontal direction or in the vertical direction, due to the construction of this setup, and then I have another vector for the other direction. Now, if you were to apply probability theory here: okay, how does the actual mirror behave? The mirror flips a photon going in this direction into that one, and a photon going in that direction into this one, so if you want to describe it through a stochastic matrix, it should be the matrix that swaps those two states. This one here, instead, is a half-silvered mirror: what does that mean?
It means that half of the time the photon passes through, and half of the time it is reflected: if I were to put photon detectors right after it and shine the laser here, half of the time this detector would go off and half of the time that one would. So, probabilistically, you would expect this to be described by the stochastic matrix with all entries equal to one half: wherever the photon comes from, half of the time it is reflected and half of the time it is transmitted. It is then easy to see what you would expect to happen in the full setup: just by splitting the possibilities, the photon can go this way or that way, each step with probability one half, so each path should have probability one fourth. There are four different paths the system can take, all with probability one fourth, so at the end of the process I should expect this detector to go off half of the time and that one to go off half of the time. This is not at all what you see if you actually do the experiment: this one goes off 100% of the time. So what does that mean? And this is going to sound outrageous to people who have not seen quantum mechanics before: the claim I am going to make is that probability theory is the wrong description here; it's just not the thing we should use. We should use a different kind of probability theory, in which states are not described by probability vectors but by probability amplitudes: your state is going to be a vector (alpha, beta), where alpha and beta are complex numbers. If you haven't seen complex numbers, for this particular setup it is fine to assume they are real; the main point is that they can be negative. And the only requirement is that alpha squared plus beta squared should
be normalized to one. And all this process here I should redo by studying what these devices do to the amplitudes, and once I arrive at the detectors I should square the amplitudes of the state to get the probability of this one going off or that one going off. Soon enough, after Catherine's presentation, these things will become much easier to understand. The claim is that, if you actually believe quantum mechanics, the beam splitter matrix should have a minus one in one of its entries, and this is fine; what happens is that the path that goes like this and the path that goes like that arrive at this detector with amplitudes one half and minus one half. Before, we were summing probabilities, saying each path has probability one fourth and they all bring me here; but now, since you no longer have probabilities but amplitudes, which can be negative, the paths can destructively interfere and give you probability zero at the end, something you would never expect from classical probability theory. This is a purely quantum phenomenon, and it marks one of the main deviations of quantum mechanics, which, if you haven't seen it before, is this kind of quirky version of probability theory, from classical probability theory. So here is an example of a process that I cannot explain using a naive modelization through classical probability theory, and, going back to what I said before, these are the kinds of processes that I want to make available to my computational device, because I could extract from them something that classical randomness cannot give me. And with this I end my presentation. This was hopefully a nice introduction to Catherine's lecture, coming right away. Thank you for your attention, and I'm sorry if it was a bit boring for some of you, but I think it was useful to get everyone on the same ground. I remind you all that we now have a group photo that is going to take place just outside,
at the entrance of ICTP: go to where you picked up your badge, just outside, and wait there; we'll take the photo before restarting with the next lecture. They're all yours.
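The amplitude bookkeeping for the interferometer described above can be sketched numerically; this is an illustration of mine under the lecture's simplified two-path model, with a Hadamard-like matrix standing in for the beam splitter (the exact sign convention is an assumption, and the ordinary mirrors, which only relabel the paths, are omitted):

```python
import math

def apply(matrix, vec):
    """Multiply a 2x2 matrix by a length-2 vector."""
    return [matrix[0][0] * vec[0] + matrix[0][1] * vec[1],
            matrix[1][0] * vec[0] + matrix[1][1] * vec[1]]

# Classical model: the beam splitter is a stochastic matrix acting on a
# probability vector (p_horizontal, p_vertical).
splitter = [[0.5, 0.5],
            [0.5, 0.5]]
p = apply(splitter, apply(splitter, [1.0, 0.0]))
print(p)  # [0.5, 0.5] -> each detector fires half of the time

# Quantum model: amplitudes instead of probabilities; the splitter has a
# minus sign in one entry (Hadamard-like, up to convention).
s = 1 / math.sqrt(2)
H = [[s, s],
     [s, -s]]
amp = apply(H, apply(H, [1.0, 0.0]))
probs = [a * a for a in amp]  # square the amplitudes at the detectors
print(probs)  # ~[1.0, 0.0] -> one detector fires every time
```

The four classical paths each carry probability one fourth and add up to 50/50, while the amplitudes of the two paths into the second detector cancel exactly: that cancellation is the destructive interference the lecture refers to.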