a lot of lectures, a hackathon, a competition, and the amount of code you've written in the past few days. You worked really hard over the last two days on different use cases. So we have 16 teams and 16 use cases, all different. Today you're going to give your presentations. We split the 16 teams into two groups of eight, and each group will have a different jury; the best two teams from each jury will be the winning teams. The use cases you've been working on for the past two days were a combination of use cases brought by industry and internal use cases brought by Quantinuum, and those are, I would say, much more complex and mature than use cases you could have found two or three years ago. It's a testament to how the whole industry, the whole field, has evolved in the past few years. So first of all, thank you very much for being part of that. It's not a trivial effort by far. I recognize that: when I saw all the different teams in the different rooms, you were working hard, and those are not easy problems. If the problems you were tackling could be solved in two days, people would know it and the industry would be totally different. But this is only the beginning. You've been working for the past two days with a team. Maybe you'll collaborate again and continue working with your teams, and maybe with the mentors, either the mentors from Quantinuum or the mentors from the industrial companies. I actually strongly encourage all the different companies that came to the hackathon to re-invite the participants to give the presentations internally, to keep the momentum after the hackathon. Because we don't want the hackathon, we don't want the whole quantum effort, to stop tonight. We want to keep the momentum and continue, because we're building together something that is not only for this week but for the years and decades to come. So you will be giving your presentations this morning until around 12-ish.
So every single team is going to give a presentation of 10 minutes plus five minutes of questions and answers. As I told you on Friday morning, you will be judged by both the jury members and the mentors. The ratio of the marks is two thirds from the jury and one third from your mentors. After we're done with the presentations, we'll reconvene this afternoon in an exceptional venue in central Trieste called the Sala Piccola Fenice. We'll have shuttles bringing you there from the ICTP. We'll have welcome speeches from the director of the ICTP and the founder of Quantinuum, we'll be giving out all the awards, and we'll have a fun cocktail time where you'll be able to network with the people from the different companies and from Quantinuum, and to relax after this intense week, and I know what that is like. On this short introduction, I will let Thomas indicate again the split into the two groups. I'm going to say the team names assigned to the two juries, and he'll be the master of ceremony. So Thomas, please go ahead. We ordered food for 140 people, so you'd better eat. It's going to be very quick. I'm just going to say which teams will move to the second room, the Stasi room on the first floor. Can you hear me? No, further away, okay, perfect. And which teams will stay here. The teams that will leave are team number two, Air Jordan Wigner; team number three, with no name, I don't know; team number six, Annie Sun; team number eight, Captain Quantum; team number 11, D Ractors; team number 12, ZX; team number 13, Alpha plus minus Alpha; and team 14, Q Disco Search. You can already stand up. Oh, and use your laptop for the presentation, actually, if that's okay. And every other team, I'm just going to repeat them to avoid confusion, stays here. That will be team one, Quantum Antifrauds Squad; team four, GBQ. GBQ? I don't know. Yeah, yeah, that works.
Team five, Five Guys; team seven, Average Dodo Enjoyers; team nine, no name; team 10, no name; team 15, Quantum Firecats; and team 16, Eigen Criminals. Okay, and that will be the order of passage. So I will welcome team one, Quantum Antifrauds Squad, to start setting up; take your time, there's no rush for now. I don't know, I can't get out of the presentation. Okay. We're just waiting for the talk. Okay, perfect. Let's go. Can I use it with the mic? Okay, go. Testing, testing. You need to just slide it up. Yes, for talking, please. So, can you hear me? Yeah. Hello, everybody. We are the Quantum Antifrauds Squad. As the name of the team says, we were tasked by Intesa Sanpaolo to solve a fraud detection problem with a quantum machine learning algorithm. In this presentation, we will start by looking at the data and applying classical methods to analyze them. Then we will move to quantum and compare the results of the two approaches. So let's start with the data. We had a highly unbalanced dataset containing 500,000 financial transactions, and as you can see in this graph, at around 10 transactions per user we recover 99% of the distribution. The dataset has 15 different features. Some of them are continuous, like the timestamp at which the transaction occurs, and others are categorical, like, for example, the ID of the bank user. We have just one column for the target, which is the fraud column. Here the entries are binary: zero stands for a legal transaction, while one stands for a fraudulent one. The problem is that this dataset is highly unbalanced. In fact, the ratio of ones to zeros is about 2%, so the ones are 2% of our total dataset, so to speak.
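With roughly 2% fraudulent transactions, plain accuracy is a misleading metric, which is why the team focuses on recall later on. A minimal sketch (with invented toy labels, not the team's actual data) of why that is:

```python
# Hypothetical illustration: with ~2% fraud, a trivial "always legal"
# classifier scores high accuracy yet catches zero fraud, so recall on the
# fraud class (label 1) is the figure of merit that matters.
def recall(y_true, y_pred, positive=1):
    """Fraction of actual positives that were correctly predicted positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    actual_pos = sum(1 for t in y_true if t == positive)
    return tp / actual_pos if actual_pos else 0.0

# 98 legal (0) and 2 fraudulent (1) transactions, mimicking the 2% imbalance.
y_true = [0] * 98 + [1] * 2
always_legal = [0] * 100
accuracy = sum(t == p for t, p in zip(y_true, always_legal)) / len(y_true)
print(accuracy)                       # 0.98, yet no fraud is caught
print(recall(y_true, always_legal))   # 0.0
```

The same `recall` definition is the one implicitly used when the team compares classifiers later in the talk.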
And we tried to overcome this problem with quantum computing methods, but we also applied some classical methods to evaluate the performance and compare them. So, first of all, we did some reduction of the data. First, we applied PCA to reduce the features from 15 to two. Second, we reduced the total number of entries, because we needed to produce some results before the end of this hackathon, so it was quite necessary. For example, here we have a population of 1,000. You can see the two reduced features and how they are distributed in the plane for the train set and the test set. In orange, we highlighted the fraudulent transactions, and in blue, the regular ones. As you can see from this graph, there is no trivial kernel function which can separate the fraudulent transactions from the legal ones, so this problem is not trivial. As I was saying, first of all we applied some classical machine learning algorithms. We applied logistic regression, which squashes the results of the predictions between zero and one so they can be interpreted as probabilities. We also applied XGBoost, which relies on a decision-tree algorithm, and also a support vector machine, which tries to find the best hypersurface separating the true predictions from the false ones. We also tried a time-series approach for this dataset, but we concluded that it was not a viable way, and so we stopped trying to solve this problem with a time-series approach. So let's move to the quantum part, and I'll let my teammate Antonio speak. Regarding the quantum approach, we use a variational quantum circuit, which is a hybrid model composed of a quantum evaluation and a classical optimizer. The variational quantum circuit is composed of an encoding circuit, in green here, and a variational part, in red. The encoding part is used to embed the data in the quantum circuit, while the variational part is composed of rotational gates, whose angle parameters are optimized classically.
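Among the classical baselines mentioned above, the logistic-regression idea can be sketched in a few lines: a linear score is squashed through the sigmoid into (0, 1) and read as a fraud probability, then thresholded. The weights and the low 0.1 threshold below are illustrative assumptions, not the team's fitted values (they mention a 0.1 threshold later in the Q&A):

```python
import math

# Sketch of the logistic-regression step: sigmoid squashes any real-valued
# score into (0, 1), so the output can be interpreted as a probability.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(features, weights, bias):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return sigmoid(score)

weights, bias = [1.5, -0.8], -1.0     # made-up weights for two PCA features
p = predict_proba([2.0, 0.5], weights, bias)
label = 1 if p >= 0.1 else 0          # low threshold favours recall on rare frauds
print(p, label)
```

A low threshold trades precision for recall, which matches the imbalanced-fraud setting described above.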
Moving to the model we developed: for the encoding, we use an encoding with RX and RY gates. The variational part is composed of rotational gates followed by an entangling block with CNOTs. This kind of circuit was also developed by Maria Schuld in the article "Circuit-centric quantum classifiers". For our model, we use eight parameterized layers. Another technique we used is the re-uploading technique, which consists of re-encoding the data each time a new parameter block is added; we tried the model both with and without this technique. Now we proceed to analyze the obtained results. As my colleague mentioned, we performed the tests considering 10 epochs, using RX and RY as the encoding technique, and with two features. On the y-axis, we have the recall, which is the most important figure of merit for this kind of application: the percentage of samples correctly classified as belonging to the class of interest, which in this case is class one, the fraud class. On the other axis, we have the number of transactions on which we perform the test. The variational quantum classifier is the purple one, and we indicate whether or not the re-uploading technique is applied with a different marker shape. In particular, the square purple marker, which corresponds to the variational quantum circuit without the re-uploading technique, is the best one as far as the recall figure of merit is concerned, and so it outperforms all the other classical methods considered for comparison. Then, in this slide, we can see the training time of all the considered methods as a function of the number of transactions. The variational quantum circuit is clearly the slowest one, but in particular, without the re-uploading technique it is faster.
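The layered circuit with data re-uploading described above can be sketched for a single qubit using plain state-vector algebra. This is a deliberately tiny reduction of the team's model (their circuit uses more qubits, CNOT entangling blocks, and trained parameters; the angles below are made up), but it shows the alternating encode/rotate structure:

```python
import math

# Toy single-qubit data-re-uploading circuit: each layer re-encodes the
# feature x with RX(x), then applies a trainable RY(theta).
def apply(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def rx(a):
    c, s = math.cos(a / 2), math.sin(a / 2)
    return [[c, -1j * s], [-1j * s, c]]

def ry(b):
    c, s = math.cos(b / 2), math.sin(b / 2)
    return [[c, -s], [s, c]]

def vqc_prob_one(x, thetas, reupload=True):
    """P(|1>) after alternating encoding and variational layers."""
    state = [1 + 0j, 0 + 0j]                 # start in |0>
    state = apply(rx(x), state)              # initial data encoding
    for theta in thetas:
        if reupload:
            state = apply(rx(x), state)      # re-upload the data point
        state = apply(ry(theta), state)      # variational rotation
    return abs(state[1]) ** 2

p = vqc_prob_one(0.7, [0.3, -0.5])
print(p)  # a value in [0, 1], read as the fraud probability
```

With `x = 0` and all angles zero the circuit is the identity, so the output probability is 0, which is a quick sanity check of the gate algebra.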
The quantum method is the slowest one because we chose to perform the tests using a classical simulator of a quantum computer, in order to evaluate the quality of the algorithm and of the approach without any issues related to non-ideal phenomena. We can observe the same type of trend if we look at the runtime as a function of the number of transactions: also in this case, the variational quantum circuit is the slowest, due to the classical simulation of a quantum computer, which requires a lot of matrix-vector multiplications. In this slide, we try to consider together the recall, which is the most important figure of merit, and the training time, fixing the number of transactions at 10,000. The variational quantum circuit without the re-uploading technique, the purple square, is the best one as far as performance, i.e., recall, is concerned, but it is clearly nearly the worst one in terms of training time; in general, the two quantum approaches are the worst ones, and the very worst is the variational quantum circuit with the re-uploading technique. The same trend can be identified if we consider the runtime as a function of the recall. Our work suggests that quantum computing is promising for overcoming some limitations of classical machine learning, and in particular that it can help in applications in which the dataset is strongly unbalanced, like this one. With more time, we could further improve our model: changing the type of encoding and identifying the best one from the literature, and also optimizing all the hyperparameters involved in the model. Clearly, training and runtime could be reduced by exploiting real hardware, once the non-ideal phenomena are reduced with better quantum hardware. Thank you for your attention; we are open to questions. Thank you for the presentation. When you compare the classical models with the quantum ones, how do you make sure the comparison is fair?
Like, what size of classical model, what architecture of each classical model is the fair one to compare against a given variational quantum circuit of a specific size and ansatz? We use the same dataset, with the same PCA and the same type of rescaling applied in both cases. Moreover, we consider the same operating conditions and the same threshold for the classical methods and the quantum method. All these choices make the comparison between the classical and quantum methods as fair as possible. Okay, so the way you preprocess the dataset is the same, and it's the same dataset, of course. The same size, so the same number of transactions, which is the most important thing. But what about the models? Because the models have hyperparameters, right? The models, sorry? The models can have hyperparameters or not? Yes, they have hyperparameters, and the parameter that we keep the same is, for example, the threshold. So when we measure using the classical methods, we set the threshold to 0.1, and we do the same for the variational quantum circuit. In this way, the readings are comparable, to have the fairest possible comparison. You probably didn't have time, but I assume all these simulations with the quantum simulator were noiseless? Noiseless, yes. This might be an unfair question: do you have a sense of how it might perform with noise? No, we have not tried with noise, because the goal was to evaluate the quality of the method, and the noise could clearly decrease the performance too much for a fair comparison with the classical methods, which do not have this type of problem. And could you remind me how many shots you needed to train? How many shots? I don't remember, maybe 10,000; I'd have to check. Maybe you said this already: what is the state of the art?
What is the state of the art? For the encoding method, there are different approaches. There is amplitude encoding, which consists of encoding the data as the amplitudes of the state. For this kind of model, we developed a new type of encoding, the RX, RY type of circuit. Why this? Because we think this kind of encoding should obtain good performance for this kind of classification, because it adds some complexity to the model; considering the difficulty of the data, it's better to have a more complex encoding rather than a simpler one. As for amplitude encoding, there are generally some limitations in accuracy. And the last question: what do people use in the real world? That's what I mean by the state of the art. What models are used in industry for this kind of problem? For this kind of problem, in industry, at Intesa, they tried it in the classical world. In the classical world, what methods do they use? Generally they use logistic regression, XGBoost, SVM. So it's one of the things you tried? Yes. The best one, classically, is currently logistic regression, but our model was also competitive. All right, thank you. You're welcome. Let's give them a round of applause. Team two, I mean, the second team here, team four, yeah. Does it connect? Yes, with the cable, cool. And the pointer, was it theirs? Theirs, but we have the top-bottom pointer, okay? Does it also advance the slides, or do we have to advance them ourselves? I'm going to make it easier for you; I'll take it out and put it here. Okay, perfect. And is this mirroring the screen, or extending? He's setting something up. I have to go to display settings, right? Yeah, sure.
Would you use the microphone? Thank you. Of course. Yes, you need to speak in front of the mic. Why is it not working? We can try with the adapter; usually we go directly to HDMI here. There's an HDMI port here, I think we can try. It's not appearing. Okay, so from there, I think we'll take it. So the pointer, how does it work? Just pass the pointer. Ah, okay. So I should point like this, right? Yeah, exactly. Does it change the slides? Yes. Oh, cool. But if I stay here, I cannot really see the slides too well. You can stay here, come here. But the microphone doesn't work there, right? No. Okay, I can just stay here and it's fine; I can point with the arrow on the screen. Okay, it's fine. Can we just start? I can start? Okay, nice. Good morning. We are Group 4, and we worked with Generali on the analysis of systemic credit risk on NISQ devices. We implemented an approach based on QAOA and digitized counterdiabatic QAOA. We were supervised by Matteo and mentored by Yoshi. Just as a brief outline of the problem: financial markets and institutions are nowadays highly interconnected, like in a network. So the default or the downgrade of a company, which means its failure or risk of failure, can affect other companies; in particular, there can be a contagion effect, just like in a network. So it is interesting to try to estimate this contagion effect through the market quantitatively.
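The network picture described here, with companies as nodes and contagion probabilities as directed weighted edges, can be sketched with a plain adjacency dictionary. The numbers and the independence assumption below are invented for illustration; the team's actual model is the QUBO formulation presented later:

```python
# Hedged sketch: companies are nodes, a directed edge (i, j) with weight w
# means "if i defaults, the risk propagates to j with probability w".
contagion = {
    1: {3: 0.6, 2: 0.4},   # company 1 is "big": it affects several others
    3: {7: 0.2},
    2: {},
    7: {},                 # company 7 is a "startup": no outgoing influence
}

def path_probability(graph, path):
    """Probability that a default propagates along a directed path, under the
    simplifying assumption that edge-wise contagion events are independent."""
    prob = 1.0
    for src, dst in zip(path, path[1:]):
        prob *= graph.get(src, {}).get(dst, 0.0)
    return prob

print(path_probability(contagion, [1, 3, 7]))  # 0.6 * 0.2 = 0.12
```

A missing edge contributes probability 0, so any path through the "startup" node 7 kills the propagation, matching the intuition given in the talk.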
In our case, we were concerned with a smaller subsection of the financial market that we can think of as a portfolio. A portfolio, for our purposes, is just a collection of companies that we can label from one to N, in which we have invested if, for instance, we are the owners of the portfolio. If you map each company to a vertex of a graph, then you can ask yourself the following question: say, for instance, company one fails; does the risk propagate to company three with a probability that is larger than zero? If so, you draw an edge of a directed weighted graph, with the weight of the edge proportional to the probability of contagion. So at the end of the day, the portfolio can be represented as a directed acyclic graph with weighted edges. In particular, in this example you can see that number one is a big company that affects most of the market it is interconnected with. If company one fails, it is a problem for the rest of the network, while company seven can be thought of as a small startup: it is influenced by the rest of the network, but if it fails, it doesn't really influence the rest of the financial market. Yeah, sure. Yes, the direction is important. You can have an arrow pointing from one to three, and you can also have an arrow pointing from three to one, in the same network. If you write a cost function, you pay a cost for that, so it's something we don't want in the solution. In principle, an optimal solution should not have these double arrows. You can have them, but in the cost function, if you have a term like this, you pay an energy cost. Okay, so there is a penalty for it somehow. Because it's usually not... I mean, it can be the case that both companies influence each other. It can be the case in the QUBO formulation that I'm going to talk about shortly, which is taken from other papers. They really don't...
I mean, it's possible, but they don't assign it a high probability; these configurations get a small, rare probability. So, in practice, in our problem we have a dataset which is a collection of time series: one time series for each of the companies of the market. In particular, these data are a little bit complicated and come from the financial world; they are credit default swaps. But for our concerns, they are basically an estimate of the risk: if you have high values of these functions, it means you have a high risk of failure. For instance, here you see this peak in 2020: it was due to the pandemic shock. While here, more recently, this year, there was the crisis initiated by Silicon Valley Bank that propagated through the network, and you see that Credit Suisse also had a very high risk of default, and it actually defaulted. Similarly, you see there is a correlation with the other companies. I don't have time to go through the details, but the idea is that you take these time series, you take a snapshot of them, so a finite interval of time, and you also translate these snapshots in time, so that you account for causality from one snapshot to the other. Then what you want to do, given this dataset, is an inverse problem: you want to reconstruct the optimal graph describing this propagation of risk through the network. So let's say that after quite some modeling calculations, this can be mapped into a QUBO problem. In particular, you have different terms in the Hamiltonian. You have a cost function, which just represents the cost of a graph, and you want to minimize it. Then we have some penalties: one penalty accounts for the existence of a maximum number of parents for a node.
This means that, for instance, if you fix M equal to two, you pay an energy cost in a configuration where you have three parents pointing to the same node. You also have a penalty if you have 3-cycles, because that is something you usually don't want to have. So if you have N companies and you fix this parameter M equal to two, you have a QUBO formulation with a number of binary variables that scales quadratically, and you can do the usual binary encoding into quantum spins and Paulis. The name of our group is GBQ, and we added "gate-based" because the previous work was done with quantum annealers, which usually have limited connectivity, and there you are restricted to QUBO problems. In our case, that is M equal to two; you can show that if you have M larger than or equal to three, it is not a QUBO problem anymore: it implies higher-order terms in the Pauli Zs. So we use a gate-based approach. In particular, I don't have the time to go through the details of QAOA, but it is basically a hybrid quantum-classical variational algorithm where you have a layered ansatz, and each layer has a structure. In one case, QAOA, you have a mixer term generated by Pauli X and the problem Hamiltonian; in the other case, you have an extra term coming from the theory of counterdiabaticity. The main improvements of our method over the state of the art are that we have all-to-all connectivity, as for instance on trapped-ion devices; we performed the simulation on the emulator, and we can think about real hardware for future work. We can also go beyond QUBO, because you can implement higher-order terms with the usual decomposition into gates. I said that the number of qubits scales quadratically in the number of companies, i.e., the number of nodes of the network, but actually a dimensionality reduction can be performed with some methods, and you see that you still have quadratic scaling, but it is quite improved in practice.
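The dimensionality reduction mentioned here (and detailed at the start of the next answer) amounts to thresholding the company-correlation matrix so that weak entries are dropped, which removes candidate edges and hence qubits. A hedged sketch with an invented 3-company matrix:

```python
# Sketch of correlation-threshold sparsification: zero out weak off-diagonal
# correlations, then count the surviving candidate directed edges.
def sparsify(corr, threshold):
    """Keep diagonal entries and off-diagonal entries with |value| >= threshold."""
    n = len(corr)
    return [[corr[i][j] if i == j or abs(corr[i][j]) >= threshold else 0.0
             for j in range(n)] for i in range(n)]

def candidate_edges(corr):
    """Ordered pairs (i, j), i != j, still allowed as edges after sparsifying."""
    n = len(corr)
    return sum(1 for i in range(n) for j in range(n)
               if i != j and corr[i][j] != 0.0)

corr = [[1.0, 0.8, 0.1],
        [0.8, 1.0, 0.05],
        [0.1, 0.05, 1.0]]
sparse = sparsify(corr, 0.5)
print(candidate_edges(corr), candidate_edges(sparse))  # 6 -> 2
```

Fewer candidate edges means fewer binary variables in the QUBO, which is exactly the qubit saving shown on the scaling plot.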
So you have the number of qubits on the y-axis, and you see that you can perform simulations with a larger number of companies even on these devices. Now I will leave the floor to my friend for the results. The algorithm for the dimensionality reduction was already implemented, but the idea is that you have a matrix describing the correlation between companies, and you basically put a threshold on the correlations and ignore those matrix elements that are small enough. So the matrix is now sparse? Yeah, exactly. Now you have a sparse matrix, and this makes it easier: you can reduce the number of qubits you need for your encoding. To be honest, I didn't have the time to go through the details; how you actually find this QUBO formulation is a long calculation, but if you add the constraints, you need fewer variables. So, for the results: basically, the first thing we want to show here is the actual optimization problem, the outcome of the optimization using QAOA and DC-QAOA. We ran this problem for a thousand steps for three, four, and five companies, which corresponds to nine, 10, and 15 qubits. Here we are basically showing that for each problem we ran 20 instances of QAOA and DC-QAOA. In the graphs, the dashed line is the average of these 20 instances, the shaded area is basically the standard deviation around it, and the solid colors are the best case, that is, the instance that achieved the best result. In this case, what we want is to achieve values closer to one, because this is the normalized energy: we divide the energy that we find with our circuit by the exact ground-state energy. So higher is better.
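The exact ground-state energy used for the normalization above can, for small instances, be found by brute-force enumeration of the QUBO. The toy instance below is invented: two favourable binary variables plus a quadratic penalty if both are selected, loosely mimicking the max-parents penalty discussed earlier:

```python
import itertools

# Brute-force minimization of a tiny QUBO: E(x) = sum_i l_i x_i + sum_ij w_ij x_i x_j.
def qubo_energy(x, linear, quadratic):
    e = sum(linear[i] * x[i] for i in range(len(x)))
    e += sum(w * x[i] * x[j] for (i, j), w in quadratic.items())
    return e

def brute_force_min(n, linear, quadratic):
    """Enumerate all 2^n bit strings and return the minimum-energy one."""
    return min((qubo_energy(x, linear, quadratic), x)
               for x in itertools.product((0, 1), repeat=n))

linear = [-1.0, -1.0, 0.5]    # hypothetical edge costs (x2 is never worth taking)
quadratic = {(0, 1): 2.0}     # penalty: selecting both x0 and x1 costs +2
energy, config = brute_force_min(3, linear, quadratic)
print(energy, config)         # -1.0, with exactly one of x0/x1 set and x2 = 0
```

This exponential enumeration only works for a handful of variables, which is precisely why the team limits the exact reference to three-to-five-company instances.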
And as we can see here, DC-QAOA outperformed QAOA by far. In this case, using the QAOA algorithm, specifically in both cases for P equals one, so for one layer, we found with QAOA, let's say, half of the energy that we wanted to achieve, while DC-QAOA usually converged way closer, like ninety-odd percent. Another interesting outcome comes from the following analysis. As I said, we ran this problem for 20 instances for each case, for three, four, and five companies. So it's interesting to see the success probability for each one of these instances: basically, how probable it is that in that instance we actually found the correct ground state. We see that for three companies, nine qubits, for most of the instances we found the ground state, or had a very, very high probability of finding the ground state. That changes a little bit when we increase the dimension of the problem to four and five companies, because then we go to a sparser regime, let's say, where only in a few of the instances do we actually have a non-zero probability of finding the ground state. Specifically, for four companies, in the cases where it is non-zero, it is still very high, like ninety-odd percent. But for five, it starts to decrease a little bit, and it ranges from 50 to 70 percent for the non-zero instances. So basically you run it a lot of times and you check when you find the ground state, right? Yeah, exactly. When we solve the problem, we get a bit string, because it's a classical Hamiltonian, and a bit string solves the problem; we can decode it, and this is the probability of finding that bit string. So, for example, in this case it's 60 percent, so we get the ground state with 60 percent probability. Of getting the ground state, perfect, exactly. Yes. This is on a simulator with the Adam optimizer; could you do that on a real device?
What we actually do, and can do on a quantum device, is find the gradients using the parameter-shift rule and then apply Adam, because once we have the gradients, Adam is just a function of the gradients. The parameter-shift rule is much easier to do in experiments, and that's what we've done here. Here we are also using the parameter-shift rule for the simulation, because we need to keep track of what would happen with a real device. As I said before, we ran this problem multiple times, first in simulation, but then we also ran one instance of the problem, specifically the case of five companies, on the Quantinuum H1-1 emulator. Here we are showing the result we got out of it: on the x-axis we have the possible bit strings, the possible states of our system, and the probability of that state occurring in the emulation. The high bar here is actually the ground state: in this run, we found the ground state with around 66 percent probability. And the graph shown inside the picture is basically the representation of the network, the directed graph that Pieter explained, based on the bit string that is the output of the model. So, just to summarize and give some perspectives: we implemented a variational quantum algorithm for a financial use case, and we saw that we can improve scalability and improve the quantum circuit ansatz. We did some actual experiments on the Quantinuum emulator, which helps show that our problem is actually implementable. For future work, as was mentioned before, the previous annealing-based work could only solve this problem for QUBO formulations, and we know that with our approach we can actually run it for beyond-QUBO problems, so we can find higher-order correlations.
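The parameter-shift rule mentioned above can be checked numerically in the simplest case: the expectation value ⟨Z⟩ after RY(θ) on |0⟩ is cos(θ), whose exact gradient is −sin(θ), and the rule recovers it from just two extra circuit evaluations (no finite-difference step size needed):

```python
import math

# Parameter-shift rule sketch: for gates generated by a Pauli, the gradient of
# the expectation f(theta) is (f(theta + pi/2) - f(theta - pi/2)) / 2.
def expectation(theta):
    return math.cos(theta)   # stand-in for a real circuit evaluation of <Z>

def parameter_shift_grad(f, theta, shift=math.pi / 2):
    return (f(theta + shift) - f(theta - shift)) / 2.0

theta = 0.37
grad = parameter_shift_grad(expectation, theta)
print(grad, -math.sin(theta))  # the two values coincide
```

Because each gradient component costs two circuit runs, an optimizer like Adam can then be driven entirely by hardware-friendly evaluations, which is the point made in the answer above.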
We also want to perform actual experiments on actual Quantinuum devices; we wanted to run this weekend, but it was not possible. Another interesting extension is including time dependence, so that our Bayesian network is not static but varies in time, and we can analyze how that affects the approach. And yeah, that was it. Thank you. Thank you very much. In the interest of time, I'll only take one question. Kiran, Konstantinos, do you want to go? You compared the two QAOAs against each other. Yes. What about classical methods? What about, sorry? Classical methods. Classical methods, like simulated annealing: the classical problem has already been shown to be NP-complete; learning Bayesian networks is NP-complete. So we didn't go for it, because we already knew roughly what complexity it has. The other reason was that we wanted to target NISQ devices, where we have around 20 qubits available. So we thought it would be a better use of time to stick to making the quantum approach better, so DC-QAOA better than QAOA, and then just compare the two quantum methods. Yeah, so we already knew the complexity. One thing to keep in mind is that the theoretical complexity is the worst-case scenario. Yes, in principle. But here this is in practice, and it's the average case, with finite system sizes and so on. So if you were to write a paper on this, you would have to have a classical method to compare against as a baseline, to see if you beat it with QAOA, and then you'd need a fair way to compare. Do you have an idea of what that would be, if you wanted to do it? Yeah, the classical method would be some classical machine learning on the Bayesian network. And to be fair and honest, clearly you can take larger system sizes with classical methods, because the scaling is actually pretty good; you can do a way larger number of companies.
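A classical baseline of the kind being discussed here could be as simple as simulated annealing on the same QUBO. The instance below is invented, and the cooling schedule is one common choice among many, so this is only a hedged sketch of what such a baseline might look like, compared against brute force:

```python
import itertools, math, random

# Simulated-annealing baseline on a tiny invented QUBO, checked against the
# brute-force optimum (feasible here because there are only four variables).
def energy(x, linear, quadratic):
    e = sum(linear[i] * x[i] for i in range(len(x)))
    e += sum(w * x[i] * x[j] for (i, j), w in quadratic.items())
    return e

def simulated_annealing(n, linear, quadratic, steps=2000, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    e = energy(x, linear, quadratic)
    best_x, best_e = list(x), e
    for step in range(steps):
        temp = max(1e-3, 1.0 - step / steps)    # simple linear cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1                               # propose a single bit flip
        e_new = energy(x, linear, quadratic)
        if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
            e = e_new                           # Metropolis acceptance
            if e < best_e:
                best_x, best_e = list(x), e
        else:
            x[i] ^= 1                           # reject: undo the flip
    return best_e, best_x

linear = [-1.0, -1.0, -1.0, 0.5]
quadratic = {(0, 1): 2.0, (1, 2): 2.0}          # penalties on adjacent picks
exact = min(energy(list(x), linear, quadratic)
            for x in itertools.product((0, 1), repeat=4))
best_e, best_x = simulated_annealing(4, linear, quadratic)
print(exact, best_e)
```

Unlike the brute force, the annealer scales to many more variables, which is the point made in this exchange about classical methods handling much larger numbers of companies.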
It would be interesting for sure to benchmark our quantum methods versus the classical ones in the regime where we can actually perform the quantum simulations. This would be very interesting. And I think the code is also available, so we can actually do it if we keep working on that. Nice, thanks. I suggest that before trying the machine learning, I would just try some energy minimization on the classical side, basically simulated annealing, because QAOA is... Simulated annealing, yeah. Yes, sure. That would be the first step to go. Thanks. Five Guys. Love the name, by the way. Start when you're ready, of course. Okay, it's connected. Remember to speak in front of the microphone. I think we need to check the settings. Is it this one that is projected? Yes, this one. Is it supposed to display here first? No, that one and the big screen. Okay, let's plug it in directly via HDMI; sometimes the connectors misbehave. Okay, it's on the big screen. All right. Good morning, everyone. We are the Five Guys, working on the Quandela challenge, which is all about the variational quantum eigensolver on photonic devices. So we are going to go straight into our problem statement, which is all about finding the ground state of some molecule. Suppose we are given a molecule: in our case, we took the hydrogen molecule and the lithium hydride molecule. We want to find the ground state of this molecule. The problem can be modeled in terms of the time-independent Schrödinger equation, which is given as an eigenvalue problem, and our Hamiltonian, the electronic Hamiltonian, is given as shown. We must note that this electronic Hamiltonian is quite difficult to implement directly on a quantum computer.
So we are going to do some steps, which we are not going to outline here, to put the Hamiltonian in a form that can be implemented on a quantum computer. We are going to use the variational quantum eigensolver approach to find the ground state of this Hamiltonian. What are we going to do? We start with a wave function with some given parameters. We parameterize this wave function and then use it to find the ground state of the Hamiltonian of the system we are working on. This setup here is the setup for the variational quantum eigensolver: we prepare our wave function, in terms of the parameters, with some quantum gates, and we iteratively change these parameters until we have a better approximation of the ground state of the Hamiltonian. To continue with the problem, I'll hand over to my teammate. Okay, so to solve this problem we use a photonic device, which is a bit different from a gate-based one in the way we have to program it. Indeed, instead of qubits, we... What are the variables, h_i, h_ij? I mean, you started with the Schrödinger equation in the continuum, like before. You have electrons going around ions, and now they became spins. What's the encoding? You need to make a mapping with the Jordan-Wigner transformation in order to get to this Hamiltonian. Okay, but the Jordan-Wigner transformation works on a chain. Yeah, but here you have... I think you can encode this in terms of creation and annihilation operators for the electrons with two sites, basically, because you have... Yep. Okay. And for you? Yeah, I'm not completely sure about that, but you have two electrons in the end for H2, for example. So do you discretize space on a lattice?
Do you use an orbital basis? What do you do? This Hamiltonian was given in the description of our problem, so I guess this is the way we have to model it. Okay, I don't want to take more time. Please go on. So indeed, we're using a photonic device, and the way to program a circuit on a photonic device is a bit different. Because we don't have qubits, we have optical modes, and to create a qubit on an optical device we can use two modes. For instance, the state zero will be one photon in the first mode and zero photons in the second mode, and the state one will be zero photons in the first mode and one photon in the second mode. Also, we don't have gates; we have two tools, a beam splitter and a phase shifter. Both can be parameterized: a beam splitter can rotate the state, with a parameterized angle, and a phase shifter shifts the phase, with a parameterized phase. So to implement the circuit, we first wrote a gate-based version. The idea is to explore the whole two-qubit space, so we created this circuit with seven parameters. Then we found a way to implement it on a photonic device: instead of two qubits, we have four modes, and we use a beam splitter and some phase shifters here, and in the middle we have a CNOT gate, which we'll show in the next slide. About the CNOT gate: of course, we have to translate from qubit gates to photonic gates. The first way to do a CNOT gate is with four photonic channels, and this is what we've done in the classical simulation of the device. But when you go to the actual photonic device, because of the photonic nature, many errors are going to happen, and you have to check whether the result is correct or incorrect. In order to do that, you need to add two additional channels to the CNOT gate, which are here and here.
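The dual-rail encoding described above can be checked with plain linear algebra: with exactly one photon shared between two modes, the 2x2 mode unitary acts directly on the qubit amplitudes, so the beam splitter behaves as a rotation and the phase shifter as a phase gate. A sketch with a real beam-splitter convention (actual hardware conventions may include extra phases):

```python
import numpy as np

def beam_splitter(theta):
    """Mode-mixing unitary of a beam splitter (real rotation convention)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def phase_shifter(phi):
    """Phase shifter acting on the second mode."""
    return np.diag([1.0, np.exp(1j * phi)])

# Dual-rail encoding: |0> = one photon in mode 1, |1> = one photon in mode 2.
# In the single-photon subspace the mode unitary acts on the qubit amplitudes.
ket0 = np.array([1.0, 0.0])                 # photon in mode 1
plus = beam_splitter(np.pi / 4) @ ket0      # 50:50 splitter -> (|0>+|1>)/sqrt(2)
probs = np.abs(plus) ** 2                   # measurement statistics: [0.5, 0.5]
```

The 50:50 beam splitter turns the dual-rail |0> into an equal superposition, which is exactly the Hadamard-like resource the ansatz needs.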
And the rest of it is just the CNOT gate as it is done with beam splitters. Okay, once we know how to do those CNOT gates, we can come back to the general architecture of the algorithm. The first thing is that we need to run the circuit on the photonic device, and this has to be done multiple times, because we're going to measure the mean value of Paulis: if you want to compute the mean value of the X Paulis or the Z Paulis, for example, you need to run the circuits multiple times. Once this is done — so you run the circuits — you compute the mean energy. In order to do this, you need a translation from the photonic device to the qubit basis, I mean the Pauli basis. And once this is done, you just run this function many times and optimize the parameters in order to minimize this energy. We can now show our results, and we can see that our simulation and the expected values from the theoretical model fit each other perfectly. We were really glad to get this plot, because behind each one of these points there is a variational quantum eigensolver optimization that was run and gave the right answer. So we see very good agreement with the simulation — these are simulations, these are experiments. Then we also tried to run our algorithm on a QPU by Quandela. And as we can see, we tried with noise and without noise, which is permitted through some parameters you can set when you build up your circuit. We tried this for just one ground-state energy, so just one value — this is the value we were expecting. And we can see that we didn't achieve it: this axis represents the number of iterations and this the value of the energy. We expect the energy to go down to the ground-state energy.
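The loop just described — prepare a parameterized state, estimate Pauli expectation values, combine them into an energy, and minimize over the parameters — can be sketched end to end with a statevector simulation. The two-qubit Hamiltonian coefficients below are invented for illustration (they are not the H2 Hamiltonian), the photonic details are abstracted away, and a coarse grid search stands in for the classical optimizer:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Illustrative 2-qubit Hamiltonian as a sum of Pauli terms (made-up weights).
H = 0.5 * np.kron(Z, Z) + 0.3 * np.kron(X, I2) + 0.3 * np.kron(I2, X)

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def ry(theta):
    """Single-qubit Y rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def ansatz(params):
    """Two Ry rotations followed by a CNOT: a minimal entangling ansatz."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0                                    # start in |00>
    state = np.kron(ry(params[0]), ry(params[1])) @ state
    return CNOT @ state

def energy(params):
    """<psi|H|psi>; on a real device each Pauli term's expectation value
    would instead be estimated from repeated shots."""
    psi = ansatz(params)
    return float(np.real(psi.conj() @ H @ psi))

# Coarse grid search as a stand-in classical optimizer.
thetas = np.linspace(-np.pi, np.pi, 61)
best = min(energy([a, b]) for a in thetas for b in thetas)
exact = float(np.linalg.eigvalsh(H)[0])   # exact ground energy for comparison
```

By the variational principle the estimate can only approach the exact ground energy from above, which is what the team's converging simulation curves show.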
But we see that we weren't able to achieve this, probably because we didn't allow enough iterations, and because the circuit implementation had some problems with the parameters. We were studying this problem last night as well, but didn't find an answer. So this is what we got so far. All right. So after doing the hydrogen molecule, we want to extend to bigger molecules — the lithium hydride molecule. For lithium hydride we need four qubits, and the ansatz is built from little units of a CNOT plus a small unitary. We want to figure out where to put our CNOT gates so that we use the minimal possible number of CNOTs, because the more CNOTs we introduce into the system, the more complicated the algorithm becomes and the more time it takes to run. To choose which qubits to put the CNOT gates on, we compute an entanglement coefficient to see which qubits are the most entangled. We use this formula, which is based on von Neumann entropy, to compute coefficients that tell us how entangled any two qubits are. And we found that q1 and q2 are very entangled, and also q1 and q3, and q2 and q3. So the main learnings and challenges from this project for us were learning how to scale to larger molecules, because it's more computationally expensive; we also learned that photonics is a great technology; and running in the cloud is quite challenging as well, because it introduces other problems. We'll take any questions — thank you for listening. Can I ask a question? Is there a question from the back? How do you find the ansatz? I think we have some code for it; we kind of took it from one of these papers, based on MI — I don't remember what MI stands for. Yeah, I'm not sure I can answer that question. So, no, we don't know which classical method was used. And the real run on the QPU?
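The entanglement measure just mentioned can be sketched for a two-qubit pure state: the von Neumann entropy of the reduced density matrix of one qubit. For a larger register one would trace out all but the pair of interest; this two-qubit version is an illustration only, not the team's exact formula or code.

```python
import numpy as np

def entanglement_entropy(psi, keep):
    """Von Neumann entropy (bits) of the reduced state of qubit `keep`
    for a two-qubit pure state psi (length-4 vector, basis |q0 q1>)."""
    m = psi.reshape(2, 2)          # row index = q0, column index = q1
    rho = m @ m.conj().T if keep == 0 else m.T @ m.conj()
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]   # drop zeros (0 log 0 -> 0)
    return float(-(evals * np.log2(evals)).sum())

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # maximally entangled: 1 bit
product = np.array([1.0, 0, 0, 0])           # |00>: no entanglement
```

A Bell pair gives entropy 1 and a product state gives 0, so ranking qubit pairs by this number tells you where a CNOT buys the most entanglement.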
Yeah, actually our results were not really encouraging, because each of these iterations was taking a really long time, and at some point it wasn't converging — we see that it starts oscillating. We think that is because there are two actual ways to implement the CNOT on a photonic device: for the simulation we used one, but for the real QPU we used another one, and we think that affects the parameters in some way that we weren't able to figure out in time for this presentation. So yeah, we can see that it doesn't converge, while if we make the same plot for the simulation, we see that the loss function — the energy — always stays in this part of the graph. So there was some problem, we think, due to the CNOT gate. I was a little confused by this plot: is one the simulator and one the actual device? Because one says noise and one says... No, no — we ran two simulations of the quantum processor, and you can set some special parameters in the simulation of the QPU to reduce the noise or not. So we tried them both — at first mostly by mistake, because we didn't see that there were tunable parameters. We tried with noise and without noise, and we see that this signal is clearer than this one; that is the main difference between the two. Okay, thanks, I'll close it up. Next is team seven, Average Dodo Enjoyers, working on a cryptography use case. Where is the presentation? Slide show. How do I operate that? Do we have a pointer? Okay, so I can change the slide with this? Yes. Okay, it's not mirrored on the screen.
Yes, we need to go back. Okay. Hi everyone, thank you for being here. We are team Average Dodo Enjoyers, with our challenge about quantum cryptography. The presentation will follow these steps: first a brief introduction to cybersecurity, particularly in the energy sector, then the advantages of using quantum computing, and then we'll dive into the challenge and our solution. Okay. As this data shows, the number of attacks has been growing steadily in recent years, and it is forecast that the total loss due to cybercrime will reach a trillion dollars by 2025. The energy sector is particularly affected by cybercriminals due to its central role in the world economy, so it is paramount for these companies to be up to date when it comes to cybersecurity. Quantum cryptography may offer help by providing a powerful tool. In particular, we would like to use quantum computing to generate randomness. In fact, classical random number generators are not really random, and this may pose a vulnerability that can be exploited. Quantum computers and quantum systems, on the other hand, are inherently random, and we would like to leverage this randomness for our purposes. Unfortunately, due to the sensitivity of these systems, noise — caused either by the environment or by an adversarial attack — may disrupt this randomness. So I'll pass the word to my teammate. Yeah, so now that we've seen a bit of context, let's dive into the challenge we had: quantifying randomness in a noisy quantum circuit. We generate random numbers with a quantum circuit that is noisy, and we try to see how this noise affects the randomness of the numbers. So first things first: what is quantifying randomness, and how do we do it? We don't want just to generate something random.
We actually want to be sure that it is random, and how random it is. It is important in quantum cryptography to actually measure the randomness. You're all thinking we do it with Shannon entropy — the only problem is that Shannon entropy can overestimate the randomness of what you give it. So we actually use the min-entropy; you have the formula here. It takes the worst-case scenario, and that is what interests us for cryptography, because we want to know what can happen in the worst case. So what gives maximum randomness? We want a uniform distribution of outputs, and since we're working with quantum circuits, we get that with an all-Hadamard circuit. To test the noise, we were given a toy model. We actually had the code of the toy model that adds the noise, but we treated it as a black box, and we tried a few circuits to see how it reacted. First, a circuit that is supposed to produce a uniform distribution — for example, the all-Hadamard circuit: you can see up here what it does when there's no noise, and then when we apply the noise, and we see there's a little difference. Then we tried a different circuit that is also supposed to produce a uniform distribution, but is different, so the noise might act differently on it. And we can sort of see that it does act differently — the noisy version of this one is less noisy than the first. Yes, we computed the min-entropy; we just didn't put it on the slides for some reason, but we do have it. So the previous graphs seem to show no difference between the results from the all-Hadamard circuit and the Hadamard-and-CNOT circuit affected by the noise.
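The distinction being drawn here — Shannon entropy can overestimate extractable randomness, min-entropy takes the worst case — is easy to demonstrate on empirical samples. A small sketch (the sample distribution is made up for the illustration):

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Shannon entropy (bits) of the empirical output distribution."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def min_entropy(samples):
    """Min-entropy (bits): -log2 of the probability of the most likely
    outcome. The worst-case measure used to certify randomness."""
    n = len(samples)
    p_max = max(Counter(samples).values()) / n
    return -math.log2(p_max)

# A skewed distribution: Shannon entropy credits it with more randomness
# than the min-entropy, which only looks at the most likely outcome.
samples = ["00"] * 70 + ["01"] * 10 + ["10"] * 10 + ["11"] * 10
```

Here the Shannon entropy is about 1.36 bits per sample, while the min-entropy is only -log2(0.7), about 0.51 bits: the cryptographically extractable amount.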
And that is not what we expected to see. Now I will tell you why we didn't expect that. We studied the noise model, and by reading the code we figured out how it works: all it does is nullify the effect of the last single-qubit gate on each qubit, with a probability parameterized by the parameter lambda. That is what we figured out. But the gates that are not single-qubit, but multi-qubit... Sorry, can I ask a question? I'm a bit confused. At the beginning you have a circuit which just generates a random string, uniformly distributed ideally — with probability one over two to the n I get one of these strings — and it's enough to have Hadamard gates. Yes, it's enough. But then you started talking about noise. Okay, then we apply a noise model to the circuits; we figured out what the toy model does by running experiments and also by reading its code. And you've seen the previous results from the all-Hadamard circuit with the noise. Now, just a second — the CNOT gates that connect the qubits down here: why do you need them? Okay: the all-Hadamard circuit and the circuit with Hadamards and CNOTs are two different circuits that both generate a uniform distribution, but they are different circuits, so we want to see how the noise model affects these two types of circuits and see the differences. Okay, I see. This is the Hadamard-and-CNOT one. And the ancillas are just reading out the qubits? Yes — that symbol is just a measurement. Okay, very good. All right, so now you have a noise model for your gates, so the gates are imperfect in some sense. The toy model is a black box: it is implemented so that you don't know where the noise comes from. The noise could come from...
...the single-qubit gate itself, from the preparation of the state, or from the measurement. So the toy model is a black box that will ruin your results, full stop. Of course, the code implementing the toy model simulates a gate error, but that's not the only possible error: the toy model is a black box that simply ruins the results of our circuits, independently of the type of noise. But we studied the experiments, read the code, and figured out how this toy model works: its effect is to nullify the last single-qubit gate on each qubit, with probability lambda, okay? When the noise is applied, it is applied to all the qubits simultaneously. So you can figure out how to avoid this noise — for example, by appending a pair of CNOTs on the same qubits, which compose to the identity: the circuit is then no longer affected by the noise, because this toy noise model is quite simple. An example of a circuit that is not affected by this toy noise model is the GHZ state for three qubits. We call this type of gate a Gandalf gate, because the noise shall not pass beyond that point in the circuit. So the CNOT acts as a barrier. So why do our results show no difference between the two circuits, the all-Hadamard circuit and the Hadamard-and-CNOT circuit? It's because the vanilla implementation of the noise model has lambda equal to 0.5. If we change the value of lambda, we raise the probability of applying the noise, and we see a clearer distinction between the first circuit and the second one. And that is not all we have done: we also extended the noise model by adding a feature that applies the noise with an independent probability for each qubit.
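The behavior just reverse-engineered — nullify the last single-qubit gate on each qubit with probability lambda, with multi-qubit gates acting as a "Gandalf" barrier that protects earlier gates — can be sketched as a circuit transformation. This is an illustrative reconstruction of the described behavior, not Quantinuum's actual toy-model code; the circuit representation as (gate, qubits) tuples is made up.

```python
import random

def apply_toy_noise(circuit, lam, rng):
    """With probability `lam` (one coin for the whole circuit, applied to all
    qubits simultaneously), remove the LAST single-qubit gate on each qubit.
    A later multi-qubit gate (e.g. CNOT) shields the single-qubit gates
    before it on those qubits: the noise 'shall not pass'.
    `circuit` is a list of (gate_name, qubit_tuple); returns a new list."""
    if rng.random() >= lam:
        return list(circuit)                  # noise not triggered
    last_1q = {}                              # qubit -> index of last 1q gate
    for idx, (name, qubits) in enumerate(circuit):
        if len(qubits) == 1:
            last_1q[qubits[0]] = idx
        else:                                 # barrier: protect earlier gates
            for q in qubits:
                last_1q.pop(q, None)
    noisy = list(circuit)
    for idx in sorted(last_1q.values(), reverse=True):
        del noisy[idx]                        # nullify the exposed 1q gates
    return noisy
```

With lambda = 1 an all-Hadamard circuit is wiped out entirely, while a GHZ circuit (H then CNOTs) is untouched, matching the behavior described in the talk.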
Now the graph — this is the last thing I will tell you, because I have to pass over. Okay, this graph shows the experimental results. The orange dots are the experimental results of the circuit with Hadamards and CNOTs — the dotted line. And the red line is the result of the circuit with all Hadamards. You can see that the circuits with the CNOTs have a lower bound on the entropy, and that is good, no? That is good because it is useful for our task to have a lower bound on the min-entropy — the worst-case scenario. Having a lower bound is really good. Here I go. I'll be very, very quick. Technical difficulties, okay — my teammate is bringing over his PC so we can show the presentation. Okay, now it's working, this one. So, I'll be very, very quick. What we talked about so far was the first part of the challenge. Now we take a theoretical point of view and assume there is no toy model: the noise model can be whatever circuit we want. How do things change? We have to look at the worst-case scenario, because in cryptography that's everything that matters, only that. So: for any circuit U that prepares any state, there exists a lambda such that epsilon, the noise channel, undoes what U does. What does that mean? The min-entropy is zero, because we only have one state. And that's not good.
So a single-circuit approach is not enough; we cannot hope to find a circuit that is perfect. What people do is take random circuits and build a certification protocol, which is based on Bell inequalities. The general approach is quite difficult, so we implemented it in the simpler case: as U we take the GHZ-type state, which is a Hadamard plus a CNOT. Then as local measurements, for the first qubit we take one of X and Z at random, and for Bob, the second qubit, we take X or Z followed by a rotation about the Y axis parameterized by theta. So what do we do? We measure, we post-process the statistics, and we build the estimator C, which is a function of the expectation values. And we have two bounds: the classical bound is two, the quantum bound is two times the square root of two. So if we have C greater than two, we are happy, because we are certifying some entropy. We cannot go above two root two — that's the maximum, and it certifies one bit of entropy. So GHZ is a good state because it's easy to implement, but it doesn't give the maximum entropy: it only gives one bit of entropy, even though it's made of n qubits. So what do we do? We check that the Bell inequalities are violated. If they are violated, we take the measurements. We then have a string that is the output of this device, and we have to use a randomness extractor to sift through the whole string and, amid the garbage, find those bits of entropy which are truly random. The output of the randomness extractor is a shorter string, but one that is truly random. We use the randomness extractor from the Quantum Origin product, which is developed by Quantinuum. And I have to stress two things. The first is that this approach is device-independent, because we didn't assume anything about the noise model. The second is that, although the second part of the challenge is theoretically motivated, we actually implemented the randomness extractor.
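The CHSH estimator and its two bounds — classical value at most 2, quantum (Tsirelson) value at most 2 root 2 — can be reproduced directly from expectation values on a Bell pair. A sketch with one standard optimal choice of measurement settings (the specific operators below are a textbook choice, not necessarily the team's exact angles):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def correlator(state, A, B):
    """Expectation value <A (x) B> on a two-qubit pure state."""
    return float(np.real(state.conj() @ np.kron(A, B) @ state))

# Bell pair (the two-qubit GHZ-type state): H then CNOT applied to |00>.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Alice measures Z or X; Bob measures rotated combinations of Z and X.
A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

chsh = (correlator(bell, A0, B0) + correlator(bell, A0, B1)
        + correlator(bell, A1, B0) - correlator(bell, A1, B1))
# classical bound: 2; quantum (Tsirelson) bound: 2*sqrt(2)
```

Any value of the estimator strictly above 2 certifies that the outcomes could not have come from a deterministic classical strategy, which is what licenses feeding them to the extractor.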
So we actually used the package: we can take a string from the device, put it in the extractor, and get out a shorter string which is random. Super quick — I will skip a bit and go directly to our example. We used the CHSH case. Okay, the inequality is violated as required, and we see that we have one bit of entropy for every two bits of information. We take our random string, which violates the CHSH inequality maximally, pass it through the randomness extractor, and we get four truly random bits out — one for every two bits of raw output. And that's it. That's our GitHub. We want to thank our mentor, Quantinuum, and the ICTP for providing the challenge and organizing this event. Thank you so much, and sorry for the time. Hello. So, you said that this is a kind of device-independent approach, in the sense that you don't need to know the noise model the machine uses, but here you're running on a simulator, and you produced some bits that we call random because you violated the bound. But obviously no true randomness is generated, because you ran on a simulator. So what do you think was missing — when I say it's completely device-independent, what kind of assumption do we add? We assume that the way we choose a measurement setting is completely random, but it must come from somewhere, so it's not truly random in that sense. We must take a starting key — as I explained here — a seed. We need to start with a starting key which has some randomness, and we assume that the key is random, but we don't know for sure. You see, you ran it on a simulator, so there's no non-determinism in the quantum sense. Yeah — so we would need to run it on a real machine. Yeah, this is a simplified version of it.
In this case it's very easy to study, but in theory, if our CHSH estimators show a violation, we should be able to say that the bits we're getting out are random. This is an easy case to study because we know the rotation is only on one qubit, so there's only one parameter to change, but if we create a completely random circuit and the CHSH inequality is violated, then what we're getting out is truly random — it's quantum. I think what I'm trying to say is that you still need to assume that the thing you're running on is actually trying to implement a quantum computer; you can't just use a simulator. That's the missing part of the assumption, so it's kind of what we call a semi-device-independent approach. But yes, thank you. Yeah, we thought some slides were a bit weak, but we didn't expect that one. The next team can start preparing already. Oh, sorry, there is a phone here. Can the next team set up? It's connected, it's mirrored, so you can see. You can use the pointer and go forward with the button. Okay, very good. Remember, you're being recorded, and there's no live streaming on YouTube. All right. The presentation by Team Nine, Tikitakatikitiket, is starting now. The floor is yours. Good morning to all. The name of our team is Tikitakatikitiket, and we would like to give special thanks to Callum, because he invested a lot of time with us, and also to Luca — he was not our mentor, but he helped us a lot. This was the first quantum project that we did.
We are from Amhfisity, and the title of our presentation is Toffoli decomposition for a Grover oracle. We did the Toffoli decomposition following a blog post, then we compared it with the one from tket, and then we had to write an oracle from scratch and run Grover's algorithm. The thing that motivated us is that multi-controlled gates must be compiled to single- and two-qubit gates before executing quantum algorithms on real devices, because it's difficult to implement multi-controlled gates in the backend; so we have to decompose them using single-qubit and two-qubit gates. An efficient implementation of an algorithm depends on optimizing the decomposition of multi-controlled gates: if we are not able to optimize it nicely, the number of gates increases, which can lead to more noise, and the depth of the circuit increases too. The real-life application we worked on was writing an oracle and implementing it in Grover's algorithm. So we had two parts: the first focused on optimizing the Toffoli gates, and the second on writing the oracle and implementing Grover's algorithm. The tasks were divided into two categories: one was optimizing the decomposition of Toffoli gates using Craig's blog post, which was suggested by Callum, and the second was finding a nice and smart oracle to implement in Grover's algorithm. We started by doing the decomposition with pen and paper, then we did the classical approach using ancillas, following the same blog post, and then we did the quantum approach, using square roots of the gate in the quantum circuit, and we compared the implementation with tket. So, our first approach: we started with the classical ancilla approach, and we found that it was leading to more qubits, which contributes more noise.
So we moved to the quantum approach, using square roots of the gate in the quantum circuit: what we do is break the n controls into sub-controls, so the sub-circuits have fewer controls, and we apply the function recursively, which leads to fewer gates and therefore less noise. And this is the result: on the left you can see the implementation from the blog post — an intermediate step we could implement for a two-qubit system — where the depth of the circuit is five, the total number of gates using the blog post was five, and the number of CNOT gates we found was two. And this one was the decomposition with tket. If you have a small number of qubits, this seems fine, but with more qubits it becomes more complicated, so it's easier to decompose with this approach. Now that we have optimized the gates, we would like to apply them to a real problem, and it was suggested to solve the travelling salesman problem on a quantum computer. As you know, in the travelling salesman problem you have a bunch of cities connected in some way, and you want to visit each city once and only once. What you can do is a brute-force search: you just try all paths and check whether each one is a valid path that visits each city once and only once. Being a brute-force search, it's very amenable to a quantum computer using Grover's algorithm: for a brute-force search problem like the travelling salesman problem, Grover's algorithm gives you a quadratic speed-up with respect to the classical brute-force search.
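The recursive idea described above — break a gate with n controls into sub-circuits with fewer controls — can be sketched with one standard ancilla-ladder scheme: AND the first two controls into a fresh ancilla, recurse on one fewer control, then uncompute. This is a common construction, not necessarily the exact one from the blog post; the gate tuples and ancilla names are made up for the sketch.

```python
def decompose_mcx(controls, target, next_ancilla=0):
    """Decompose an X gate with many controls into CX/CCX (Toffoli) gates
    using clean ancillas. Returns a list of gate tuples."""
    if len(controls) == 1:
        return [("cx", controls[0], target)]
    if len(controls) == 2:
        return [("ccx", controls[0], controls[1], target)]
    anc = f"a{next_ancilla}"
    # Compute (c0 AND c1) into a fresh ancilla, recurse, then uncompute.
    compute = [("ccx", controls[0], controls[1], anc)]
    inner = decompose_mcx([anc] + controls[2:], target, next_ancilla + 1)
    return compute + inner + compute      # Toffoli is its own inverse

# An X with n controls becomes 2n - 3 Toffolis using n - 2 ancillas.
gates = decompose_mcx(["c0", "c1", "c2", "c3"], "t")
```

Each Toffoli then gets further decomposed into single- and two-qubit gates (where the square-root-of-gate tricks come in), which is why minimizing the Toffoli count up front matters so much for depth and noise.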
And what you need for this is an oracle — you give it a path and it tells you whether the path is good or not — and then you have the amplification, or the diffuser. I went too far. The difficult part here is building the oracle, because the diffuser is basically the same for any Grover's algorithm, and the oracle you need to build is in principle very simple: you give it a path, it checks if the path is good, and if it's good, that's it; if not, it says no. But now you need to implement this on a quantum circuit using unitary gates, and doing it all by hand with only Toffoli gates is a complete disaster, because to check that the path is valid you need to input the graph of the cities. It's terrible. So what do you do? And this is the circuit we get — for, in this case, four cities that you want to visit. And this is just the oracle; then you plug it into the diffuser and you have your Grover's algorithm. If you want to do it for five cities, to simulate all the qubits you need about a terabyte of RAM, so we couldn't do it. The graph can be whatever you want — the graph impacts the circuit itself, the connectivity of the circuit. This one is basically for a square: four cities connected like this. So depending on your graph, you get a different circuit. Yes, any graph, any one. The number of qubits is the same for any graph; the graph just impacts the connections we have. I don't know, probably one of the last slides, but okay, anyway. So once you have this, you can... Sorry? You have a slide where you show...? No, no. Anyway, once you have this, you can optimize it using the decomposition described before, then plug it into the diffuser, and you have your Grover's algorithm. Maybe you can give him some details. You want to check all paths at once.
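The predicate the oracle evaluates — does a given ordering of cities use only edges present in the graph — is classically trivial; the work lies in translating it to reversible gates. A sketch of the classical version, to make explicit what the circuit computes (graph instance invented for illustration):

```python
from itertools import permutations

def path_exists(adj, order):
    """Oracle predicate, computed classically: does visiting the cities in
    `order` use only edges present in the adjacency matrix `adj`?"""
    return all(bool(adj[order[i]][order[i + 1]]) for i in range(len(order) - 1))

def has_hamiltonian_path(adj):
    """Classical brute force over all n! orderings; Grover search replaces
    this loop with roughly sqrt(n!) oracle calls."""
    n = len(adj)
    return any(path_exists(adj, p) for p in permutations(range(n)))

# Four cities connected in a square: edges 0-1, 1-2, 2-3, 3-0.
square = [[0, 1, 0, 1],
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0]]
```

Since every ordering of distinct cities already visits each city exactly once, the quantum oracle additionally has to verify, with ancillas, that the encoded bit string really is a permutation — the "Hamiltonian check" discussed next.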
So you want to take all the possible paths between, let's say, four vertices that pass through all the cities, and first you want to encode these paths. The way we do it is that each city corresponds to some number, and the idea is first to extract the number of the city from a bit representation implemented with qubits — extract the numbers of the cities and then create all the paths. Some of these paths will simply not fit in the graph. Yeah. Okay, we just considered equal weights. Yes, sorry — maybe we're talking about different problems, because we are not looking for a minimum-weight path, but just for the existence of a path connecting all the cities. Yes, exactly. We haven't had time to get to the minimization part with all the weights, and that is a bit tricky. We discussed it with our mentors, and it's not the case that you can really minimize once you assign a weight to every edge, because that's very difficult to do on a quantum device. What you could do is set some threshold and check whether the weight of a path is less or more than this threshold, and so isolate the paths whose weight is below your threshold. But finding the actual minimum — we don't think that's possible this way. Yes. But with the goal of... Yes, it's already something, right? Yes. So we have five minutes of questions — a bit less actually, four minutes. Can Luca ask a question, is that fine? I'd like to ask a question about when you compare the decompositions. Can you go to the decomposition slide — the decomposition with tket? Can you tell me a bit more about what you take away from this comparison you're making here? This one is not the comparison; it's an intermediate step that we could do with the blog post, and then we have to apply the decomposition with the tket one, because we couldn't finish this part.
Because the problem here is that you're comparing apples with pears, right? Because the gates used on the left are more powerful than the gates used on the right. Yes. So these are the difficulties: we need to decompose into the other single-qubit gates. First, we leave all questions to the end for the speakers, please. So, the intuition on the circuit, can you give me some insight there? How does it reflect the graph? Yeah. So maybe I'm going in the wrong direction. Three, four sets of qubits. I was about to say, first of all you want to represent each city bitwise. The number of cities is given from the beginning, so you can bound the bits you need. And basically you just check, one by one, with a chain of CNOT gates, whether this city corresponds to the number zero, i.e. whether the bit string corresponds to the number zero. Then you invert some of these qubits; this will correspond to, say, the number one. And you keep doing these inversions, and basically you assign to each city the corresponding number in decimal notation. Yes. So basically you have some qubits, or some sets of qubits, representing all the cities, and for each of these you will have as many ancillas as the number of cities. And with these, sorry. Sorry, I didn't get you. Yeah. I mean, I can't be very precise with the number of qubits; it is something like n squared plus n log n, plus some more. So basically, in this way, exactly one of the ancillas will be hot: the one corresponding to the number of that city, for each city. In this way you can identify which number you are looking at. So of all these ancillas, just one will be hot, and it will be the identifier for the number of the city. So this is just for encoding one path. The first thing you want to check is that this path is Hamiltonian, that it passes through every city. So what you want to ask is that, among the ancillas for the number zero, across all the cities, there is exactly one that is hot.
That is, exactly one that is hot among the ancillas for number one, and so on. You can do this with a chain of CNOT gates as well and save the result in some ancilla, so you will need another n qubits. At this point you have isolated all the Hamiltonian paths from your bit strings, but now you want to check that they actually fit your graph. And this you can do just by looking at the adjacency matrix. The adjacency matrix is your input; this is just classical, just a small matrix, and basically you check, for example, this one, maybe something like this, am I going out of line? Basically you check all the edges that are present in the path and you try to connect the two cities with the edge. So this will isolate all the paths that can fit in the graph. And you will need some other ancilla qubits that correspond to the edges. So now, on one side you check that the path is Hamiltonian; on the other side you check that the path actually fits in the graph; and then you just put a CNOT gate between these two checks, and you actually get whether the path exists or not. A lot of CNOT gates, which are then resolved in the Grover iteration. Okay, one very quick question and a very quick answer, please. So again, we compiled this circuit with pytket and it works. Better than...? No, we didn't check that. The answer is we haven't checked the timing versus the paper. Okay, all good. Thank you very much. Next team, team 10. It's here. But there is no full screen. It's not showing. I'm going to move it like this. Yeah, five minutes. At five and eight, I guess. Five and eight. Yes. Use this for pointing, right? This. So hello everyone. We are team 10, a team called 4Krylov, if you're into Call of Duty, and we worked on quantum Krylov methods to find the ground energies of molecules, and we were mentored by Nathan Fitzpatrick from Quantinuum.
So, since we're working on finding the ground energies of molecules, we need a Hamiltonian to begin with. We used the Hamiltonian for the hydrogen molecule: we had the second-quantized Hamiltonian, and then we used InQuanto from Quantinuum to convert it to a weighted combination of Paulis, which is basically the qubit encoding. We also need a reference state. For us, it's the simple reference state |1100⟩, which we get from a circuit of two simple NOT gates. Then, this is our quantum subspace Krylov method, right? As you can see, it's basically a hybrid algorithm, where you use the classical portion here to solve the eigenvalue problem, and you populate the matrices, the H matrix and the S matrix, using the quantum computer. So, as you see from the equations here, the eigenvector for the usual standard problem is 2^n by 1, right? And for a quantum subspace method you project into a subspace, so you have a much smaller subspace eigenvector of m by 1. Here m is significantly lower than 2^n, right? So you can already guess the advantage. Also, as to the novelty of our approach, the methods we are implementing are quite new: both of the papers are from 2022 and 2023, so there aren't too many references or tutorials lying around for us to play with. So it's quite novel. Basically, as you can see, finding the energies is a generalized eigenvalue problem that we solve, Hc = ESc, where H is the Hamiltonian, c is the eigenvector, and S is the overlap matrix, and from E we get the energies of the molecules we are looking for. The full problem scales exponentially, but Krylov methods get around that scaling. So we have two inputs, H and S, and we get the outputs E and c, where we use E for finding the eigenvalues, right?
So the Krylov method is basically a weighted linear combination of powers of a function of H applied to our reference state, |ψ₀⟩, right? Then, if you see, we use those powers of the function of H for time evolution, so we need both real and imaginary time evolution. As you can see, for the standard Krylov method we measure the expectation values for both the Hamiltonian matrix H and the overlap matrix S, and the difference between the two matrices is basically that in H you have the Hamiltonian and in S you don't. Yeah, we generate a subspace here. Yeah, that's the unitary Krylov method, we will be getting to that in our next slides. It's included in our slides. So here we are talking about the standard Krylov method; the unitary Krylov method is for the quantum computers. Yeah. Yeah. Yeah, the exponential is better on a quantum computer. We will get to unitary Krylov in the next slide. So you apply unitary Krylov methods on quantum computers, but the standard one is for classical solvers. As we can see, the complexity comes from the matrix element calculations, so that's our whole difficulty, and m and n are the indices of the elements of the matrices. So we are calculating the matrix elements, and the indices are the powers of our function of H, right? And you can see here, this is the unitary Krylov, which we implement on the quantum computer, and as we can see it significantly reduces the number of elements compared to the standard Krylov. As we can see in the next slide, we also use this pictorial view; it helps us visualize what's happening, and also the dimensions of the operators that we are using, to see that, since we are exponentiating the Hamiltonian, the number of elements is lower. That's what we wanted to show with this pictorial, intuitive view: the advantage.
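The subspace construction the speakers describe can be sketched classically; a toy version with a random Hermitian matrix standing in for the molecular Hamiltonian (the numbers are illustrative, and a real quantum Krylov run would estimate the H and S matrix elements on hardware rather than by direct linear algebra):

```python
import numpy as np
from scipy.linalg import expm, eigh

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
Hm = (A + A.T) / 2                       # toy Hermitian "Hamiltonian"
psi0 = np.zeros(8)
psi0[0] = 1.0                            # reference state

# Krylov basis from repeated real-time evolution of the reference state
m, dt = 4, 0.3
U = expm(-1j * dt * Hm)
basis = [np.linalg.matrix_power(U, k) @ psi0 for k in range(m)]

# Subspace matrices: H_jk = <psi_j|H|psi_k>, S_jk = <psi_j|psi_k>
Hs = np.array([[b1.conj() @ Hm @ b2 for b2 in basis] for b1 in basis])
Ss = np.array([[b1.conj() @ b2 for b2 in basis] for b1 in basis])

# Generalized eigenvalue problem H c = E S c in the m-dimensional subspace
E = eigh(Hs, Ss, eigvals_only=True)
```

Because the subspace energies are variational, `E[0]` upper-bounds the true ground energy while the matrices stay only m-by-m instead of 2^n-by-2^n.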
And this is the overlap matrix S, and you can also see what the dimensions are, and, as I mentioned before, there is no Hamiltonian in the S matrix. We can now also see how the Krylov basis is portrayed pictorially, with the evolution operators, the functions of H to some power, acting on the reference state |ψ₀⟩; then we get |ψ₁⟩, |ψ₂⟩, and we add them up, so it's a weighted linear combination of the Krylov basis |ψ₀⟩, |ψ₁⟩, |ψ₂⟩, right? And you can see how those operate and how the boxes of the time evolution scale with the powers, and we get the powers from the indices of the matrix, right? So, as we said, it's a time evolution operator, a real and imaginary time evolution operator, and for our case we use the Trotterization method. For Trotterization, you basically take slices of the time and you apply them consecutively, one after another. Each Trotter step we get from the Hamiltonian itself, so you have the summation over h_α P_α, right? For each h_α P_α you get one time slice, and you apply them all, so you get as many slices as there are terms in the Hamiltonian, the Pauli matrices, right? So, to calculate the expectation values, we use the Hadamard test, and this is important because we can get the real and the imaginary Hadamard-test values depending on the phase gate we have here: for the imaginary expectation value we apply an S gate, and for the real expectation value we don't apply a phase gate at all. Here is just how we get the counts, and you can see that we can get the elements by applying these equations, which give us the real and imaginary expectation values; when we add them up together with a phase factor, we get the result that we want. So this is for the simulator. And for the exponentiation, which I think is kind of the most important part of the project, we use a Pauli gadget.
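Backing up one step: the Hadamard-test convention just described (no phase gate for the real part, an S gate for the imaginary part) can be checked with a small statevector sketch; the matrices and the diagonal test unitary below are illustrative:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
S_dag = np.diag([1, -1j])                      # phase gate for the imaginary part

def hadamard_test(U, psi, imaginary=False):
    """Statevector simulation of the Hadamard test: P(0) - P(1) on the
    ancilla equals Re<psi|U|psi> (or Im<psi|U|psi> with the extra phase)."""
    n = len(psi)
    state = np.kron([1, 0], psi).astype(complex)   # ancilla |0> tensor psi
    state = np.kron(H, np.eye(n)) @ state          # Hadamard on the ancilla
    if imaginary:
        state = np.kron(S_dag, np.eye(n)) @ state
    # controlled-U with the ancilla as control
    CU = np.block([[np.eye(n), np.zeros((n, n))],
                   [np.zeros((n, n)), U]])
    state = CU @ state
    state = np.kron(H, np.eye(n)) @ state          # final Hadamard
    p0 = np.linalg.norm(state[:n]) ** 2
    p1 = np.linalg.norm(state[n:]) ** 2
    return p0 - p1
```

On hardware, `P(0) - P(1)` would be estimated from measurement counts on the ancilla rather than computed exactly.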
So this circuit implements an exponential function that has a Pauli string in its parameter. This is the circuit primitive, and you can see how we use basis changes depending on the gates that we have in our Pauli string. And this is the one that we made in pytket. In pytket there's a function called PauliExpBox that gives you this circuit, but we made our own routine. So this circuit we made ourselves, and this is just one case, for a specific Pauli string. We have a linear combination of exponentials of Pauli strings, sorry, so we have to make a Pauli gadget for each one and then append them. And you can see that our Pauli gadget is equivalent to using the Pauli exponential box, which you would combine with a QControlBox for the controlled unitary in the Hadamard test circuit; the top three steps can be done by pytket, but our approach is significantly simpler, because we just made the circuit specific to our problem. You can see how, for a Krylov depth of three, for ket three or ket two for example, we just apply the circuit three times. So this is just for easy visualization. We also compiled it using pytket's built-in circuit compiler to remove some redundancies that might arise in the circuit. And you can see how we had the Hadamard test here, in order to perform the Hadamard test. So for the Trotterization, well, this is what I explained previously. There's quite a difference between our circuits, and this is due to the indices: ours go from top to bottom, so we start at zero at the top and end at the bottom, while these indices start from zero at the bottom, so it just flips the circuit. But you can see how the scaling of this is a bit harder to handle. The scaling has nothing to do with Krylov; it's more to do with the Trotterization algorithm, as it would be with a phase estimation algorithm.
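A sketch of the Pauli-gadget construction in plain numpy (basis changes into the Z basis, a CNOT parity ladder, one Rz, then undo), assuming the standard textbook decomposition rather than the team's own routine or pytket's `PauliExpBox`:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def rx(t):
    return np.cos(t / 2) * I2 - 1j * np.sin(t / 2) * X

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def on(gate, q, n):
    """Embed a single-qubit gate on wire q of an n-qubit register."""
    ops = [I2] * n
    ops[q] = gate
    return reduce(np.kron, ops)

def cnot(c, t, n):
    """CNOT with control c and target t as a full 2^n matrix."""
    dim = 2 ** n
    M = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[c]:
            bits[t] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        M[j, i] = 1
    return M

def pauli_gadget(string, theta):
    """exp(-i*theta*P) for a Pauli string P: basis-change X/Y wires into
    Z, run a CNOT parity ladder onto the last active wire, apply
    Rz(2*theta) there, then undo the ladder and the basis changes."""
    n = len(string)
    active = [q for q, p in enumerate(string) if p != "I"]
    basis, undo = np.eye(2 ** n), np.eye(2 ** n)
    for q in active:
        if string[q] == "X":
            basis = on(H, q, n) @ basis
            undo = undo @ on(H, q, n)
        elif string[q] == "Y":
            basis = on(rx(np.pi / 2), q, n) @ basis
            undo = undo @ on(rx(-np.pi / 2), q, n)
    ladder = np.eye(2 ** n)
    for a, b in zip(active, active[1:]):
        ladder = cnot(a, b, n) @ ladder
    core = on(rz(2 * theta), active[-1], n)
    return undo @ ladder.conj().T @ core @ ladder @ basis
```

Verifying against `expm` confirms the gadget implements exp(-i θ P) exactly, with no global-phase mismatch.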
So the number of two-qubit gates corresponds to 2N + 1, where N is the number of Pauli terms in one element of the qubit Hamiltonian, the length of the Pauli string. This is just the worst case, because there are terms in the Hamiltonian that don't include all the qubits in the Pauli string. So for example, in this one we have three Pauli matrices, well, in this one we have four, but this is just one case where every Pauli gate is used. Okay, so to compare results, we used exact diagonalization. Since this was a sparse matrix, that was a very useful thing to do and not really costly to simulate, but this is not really the point; this is just a comparison for the ground state and the various excited states of the H2 molecule. We did a SciPy approximation. I will talk about some of the problems that arose here, but basically we tried to use SciPy to do the exponentiation of the matrices. This is how fast they converged. So, depending on the classical Trotter length, you can see how it varies with the interatomic distance. This is the dissociation plot for the ground state: each point here is the final energy it has converged to, and you can see how fast it converges using SciPy. Again, this is just for simulation. We also used a variational quantum eigensolver to try to approximate a better state than our |1100⟩ ansatz, and we ran it on a quantum backend and on a noisy quantum backend as well. However, for the summary: we have here the exact eigenvalues, and using the |1100⟩ ansatz with SciPy we got one excited state for free. We had some problems with the H matrices: there were issues where the matrices were not really Hermitian, and that depends on the subspace. Okay, I'll wrap it up. Sorry, this is supposed to be a minus one. We didn't really get a correct result using the quantum simulation.
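Exact diagonalization with a sparse matrix, as used for the comparison, might look like this sketch (the transverse-field Ising Hamiltonian here is a stand-in for the actual H2 qubit Hamiltonian, whose coefficients aren't given in the talk):

```python
import numpy as np
from functools import reduce
from scipy.sparse import csr_matrix, identity, kron
from scipy.sparse.linalg import eigsh

X = csr_matrix(np.array([[0., 1.], [1., 0.]]))
Z = csr_matrix(np.array([[1., 0.], [0., -1.]]))
I = identity(2, format="csr")

def op(gate, q, n):
    """Embed a single-qubit operator on wire q of an n-qubit register."""
    mats = [I] * n
    mats[q] = gate
    return reduce(lambda a, b: kron(a, b, format="csr"), mats)

# Toy 3-qubit transverse-field Ising Hamiltonian standing in for the
# molecular qubit Hamiltonian (illustrative coefficients, not H2 data)
n = 3
terms = [-1.0 * op(Z, q, n) @ op(Z, q + 1, n) for q in range(n - 1)]
terms += [-0.5 * op(X, q, n) for q in range(n)]
H = reduce(lambda a, b: a + b, terms)

# 'SA' asks for the smallest algebraic eigenvalue: the ground energy
ground = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
```

For a few qubits the same `eigsh` call stays cheap, which is why exact diagonalization is a convenient reference for the Krylov results.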
We think it's because of the Trotter code, which we didn't implement correctly, but, yeah, I explained it; it is more of a future step. So we could fix the Trotter code and then get results comparable to SciPy and the traditional VQE results. And you can see that this is the qubit Hamiltonian that we originally get, and we tried to reproduce it using the Krylov method, but it kind of failed in the quantum implementation, and in the SciPy implementation as well. It has something to do with the Krylov space symmetries and the symmetry-preserving nature of the imaginary and real time evolution operators. These were the references that we used, and we would like to really thank our mentor, Nathan Fitzpatrick, and ICTP and Quantinuum. So yes, if you have any questions. Thank you very much. Priority to the jury for questions, and if they have none, then the public can also ask some. Two minutes for questions. Can we go back to the question you asked before? I'm sorry, could you repeat? I couldn't get it before. Oh, I don't remember it. It was something about the function with the exponentiation, right? Yeah, yeah. So I was talking about that, but then you diverged a lot from there, so I'm not sure it's an issue anymore. In some sense, on a classical computer, you don't do the Krylov method with an f(H) that is more than a low-order polynomial, because, simply, that is, you know. So you apply H to a vector, and the gain is in the fact that when you apply H to a vector, you get a vector that lives in a particular subspace, okay. And if you take high powers of H... and this is due to the fact that H is sparse; a typical physical H is sparse, because there is locality, because of everything. I mean, it's difficult to have an H which is fully connected in real physics examples, which would be a dense matrix, okay. I was thinking about the graph.
But what you're doing here with Krylov on a quantum computer is more like phase estimation, in some sense, using the fact that e^{-iHt} on a quantum computer is a natural thing. Yeah, it's like we have to propagate the thing. And the point of unitary Krylov is that we don't have to do the matrix multiplication as in standard Krylov; we can just put it inside the exponential and Trotterize those terms. So for a quantum computer, probably H to the fourth would also be a difficult thing to do. Yes, some of the... It's either H or e^{-iHt}. I think some of our references here outline H⁴ and H⁶, and they do note that it becomes somewhat difficult for the quantum computer. But the main, I think the main future work could be a phase estimation algorithm instead of the Krylov method. The Krylov method, we found, offers a significant advantage in combining the classical and quantum approaches, because you have a smaller basis, a smaller matrix to diagonalize. But yes, you can see it's very new research, and we hope to continue on with the project; we like it very much. Okay, I have maybe just a brief question. You mentioned that you used tket; I'm not a tket person, so I'm not necessarily fishing for an endorsement, but I was just curious: do you know how much that helped optimize the circuit? We tried, quick question. We couldn't really get... like, these matrices you see here are so dependent that we couldn't really get an estimate for the quantum approach. But we tried using the compiler and not compiling the gates, and we couldn't really find an advantage. The optimization ran so fast that we didn't really find a difference in computation time per se.
But for the results, we cannot say whether it compiled correctly or not, because we didn't get good results that matched the SciPy simulation. Yes, yes, it's hand-coded. Yeah, but pytket compiled some of the things, but the circuits became so large that we couldn't really even include them in the presentation. Good, thank you very much. Thank you. Now, team 15, quantum firecats. Okay, you have the pointer? Okay, you need to change the slide with the... Ah, okay. Okay. Okay, good morning everyone. I hope you're not too tired right now. Okay, we are team 15 and we are the quantum firecats. Our project is on quantum error correction, and our mentor was Dr. Ben Criger. Okay, sorry. Our motivation. Nowadays in quantum computing we have a lot of errors, right? So the whole community puts a lot of effort into trying to fix these errors, or compensate for them. If we remember from the slides on errors, quantum computers have, on average, one error per 10³ operations. This is a huge error rate, because we do a lot of computation. So that is the motivation of this project. And as well, if we put effort into quantum error correction, we increase the reliability of the results of these algorithms. Okay, our main task: the implementation of optimal and efficient decoding of concatenated quantum codes using Python, and this QR code is the reference we took for implementing this quantum error correction. Okay, our main goal is to reproduce the following results that we found in this reference. The description of the system: we have five qubits, with different levels of concatenation (we explain later what concatenation means), using Monte Carlo sampling to generate the errors, with different depolarization rates.
And we try to get the same results for the probability of erroneous decoding. So with these tools we try to fix the main problems in quantum computing. And, a spoiler: we replicated this data. This plot we will explain a little bit later. So imagine that we want to apply our implementation of the most efficient and optimal decoder to solve the issue of having a message that is subject to noise. Imagine that Alice wants to talk to Bob. Alice will send a message that is a quantum state; the message passes through the encoder, then it is subject to noise, and then the decoder helps Bob to get the message. But we don't know whether Bob will get the proper message. So, the correct information: we will understand this because we know the Pauli operator that will act on the quantum message. If that Pauli operator is the identity, then Bob will be able to get the message; otherwise, he will get the wrong message. And given a certain set of pure errors, stabilizers, and logical Pauli operators, we developed the algorithm that should be the most efficient and optimal realization of a decoder at the different layers. Now I'll let you explain. Okay, so sorry, it's going to be a little bit technical, because we're programmers, we have to talk about details. So when we have a concatenation algorithm, the idea is that we take the system and we encode it into a bigger system, and like this we can reduce the error. Each level in the tree corresponds to an encoding level. Here, this is the tree with only three nodes per level, because, well, it's impossible to draw five in the picture, it looks way too crowded. So just imagine that you see five. We did it on five qubits, that's why it's five. And, okay, so the idea is that we work on five qubits.
So we put them through that function that we didn't write on the slide, because it's too complicated, no one's going to pay attention anyway; it's in the article. We work with our five qubits and then we pass the information to the next level. Basically, the concatenation the article discussed was using this message-passing algorithm, meaning that what we pass is the state that we calculated, knowing the initial error from the simulator, and we pass the probabilities that the decoder guessed, basically. And when you ask this question, what's the difference between the straight line and the dashed one there: one of them is when you pass all the probabilities at each level and then use them in the next level. And the other is that you take the maximum of them and set only that one to one, and the rest to zero. So it becomes a slightly different algorithm, and it scales worse, so it's better to pass the probabilities. That's what we implemented: passing the probabilities. Now, when you look at this tree, well, powers of five, you can imagine how fast this grows. There is no way you can store this thing in memory. So we had to design a way to go around the tree without storing it. So basically, as I showed, we started with just this. We pass one message to the next level, then we go here, we pass one more message. You see, now we have two things at the next level, and basically, in this algorithm, we never have to store more than just the number of qubits per level of information. So even though there are exponentially many things, you don't need terabytes of memory; you only need one matrix to store those five per level. You just go around: once you've got five at this level, you pass it further, and then you just continue. You go down and you continue passing like this. At the end, you have some result.
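The memory-saving traversal described here can be sketched as a depth-first recursion over an implicit tree; `combine` and `leaf_msgs` are placeholders for the decoder's actual message rule, which we don't reproduce:

```python
def decode_tree(depth, branching, leaf_msgs, combine):
    """Depth-first message passing over an implicit tree: children's
    messages are combined into the parent's without ever materialising
    the branching**depth leaves, so live memory stays O(depth*branching)."""
    def node(level, index):
        if level == depth:                      # leaf: take its message
            return leaf_msgs(index)
        msgs = [node(level + 1, index * branching + k)
                for k in range(branching)]
        return combine(msgs)                    # parent combines children
    return node(0, 0)
```

Only one root-to-leaf chain of partial results is alive at any moment, which is why the exponential leaf count never forces terabytes of storage.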
So, for the state that you calculated from your initial error, you have the probability the decoder gets. So basically you can compare them: did it work or did it not work? Then, what do you do to estimate how well this actually works? You do a Monte Carlo simulation. Yeah, so in one run we introduce an error, we know what the error is, and then we check whether our decoder is able to find the error or not. Then we do this many times, this is the Monte Carlo part, and we count how many misses we had, how many times the decoder was not able to find the errors, and we estimate the probability of error. This is the graph from the reference that we were able to reproduce. In the first column, for zero, we have the physical error rate, and then here are the levels of concatenation. What is the best result? The best result is that the decoder finds all errors, so we have a probability of error of zero, and as we can see, we are able to achieve this up to certain levels. If the level of error is greater than this, the algorithm is not able to. We also timed our algorithm for different concatenation levels, and we observed that the time increases exponentially, which is what we expect, because with each concatenation level the data we are dealing with increases exponentially as well. So let's go on to the last two slides. In conclusion, we observed that our algorithm scales linearly with the number of qubits, as we expect, and exponentially with the number of layers, as expected again, so it's good. We were able to reproduce the exact same results that we were supposed to reproduce, and we successfully got the exact scaling behavior that we wanted. So let's move on to the new things that we added in our implementation. First, we implemented our tree algorithm, and this algorithm has never been done in this way, so we are the first ones to implement the tree algorithm.
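The Monte Carlo estimate works the same way in any error-correction setting; a toy sketch with a 3-bit repetition code and majority vote standing in for the team's five-qubit decoder:

```python
import random

def sample_error(n, p, rng):
    """Toy noise model: each bit flips independently with probability p."""
    return [rng.random() < p for _ in range(n)]

def majority_decoder(error_bits):
    """Toy stand-in decoder (3-bit repetition code, majority vote):
    decoding succeeds unless two or more bits flipped."""
    return sum(error_bits) <= 1

def logical_error_rate(p, trials=20000, seed=0):
    """Monte Carlo: inject known errors, count decoder misses."""
    rng = random.Random(seed)
    misses = sum(not majority_decoder(sample_error(3, p, rng))
                 for _ in range(trials))
    return misses / trials
```

For the repetition code the exact logical rate is 3p² - 2p³, so the estimate at p = 0.1 should land near 0.028, and the rate drops sharply below threshold, the same qualitative shape as the reproduced plot.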
The major advantage of our tree algorithm is that you don't have to deal with terabytes of data at one time; you can take just five blocks of memory. People usually do this in different ways, but we did it by implementing a tree, and this tree can also be used in other applications, not just this one, so other industrial applications can use it. Secondly, this is the first and only Python code which does this, so we could even make a module out of it after cleaning it up a bit and optimizing it. Thirdly, since all of us come from an HPC background, we believe this code can be optimized much further; we just had two days, so we wrote it in Python, but we could do it in C or C++, because Python has a nasty way of allocating memory, and implementing a tree in Python is worse. So our estimate is that it would be at least 20 times faster in C or C++. And with this, we would like to thank all the organizers, the jury, and all of you. Thank you. Thank you very much. We have very good timekeeping, by the way. We have five minutes for questions. I was curious about the noise model. How did you introduce the noise? Was it code capacity or circuit level? I don't know if you know those terminologies, but could you explain more about the noise model? All right, I don't know if we know these terminologies, because we are not really specialists in quantum computing. We were given an algorithm that has some God-given input. So the noise is somewhere there in this God-given input, and then we get stabilizers for these five qubits. We worked on five qubits, but this algorithm can be implemented on whatever. Someone who is a quantum computer scientist should give us how many qubits, where they come from, whatever their history is, and then we encode these stabilizers and all the pure errors and whatever other things there are, and run the code.
So I cannot really answer about the noise, because this is a question for quantum computing scientists. We used a uniform random generator with a probability of 0.1, 0.5. Yeah, this. So you have some qubits, you apply some noise to them, and then you measure the operators, the parity checks. It's like, well, this is just for error correction: you have these qubits and then you introduce the error randomly, basically. Yeah. But the thing is, you have the initial probability of these qubits to be in error, and then this error just propagates. Okay. And you just, yeah, okay. Do you know what sort of simulator you used to calculate these things? Yeah, there are simulators. I mean, okay, the tree goes together with a simulator that gets this error and knows what it should be, because you just multiply Pauli matrices, right? So at the end, you get what your simulator has and what your decoder gets at the same time. So both the decoder and the simulator are inside the tree. Okay, very cool. So you find that your stabilizer, your error correction, has a threshold. Yeah. You didn't do a systematic study, but did you develop an intuition for how this threshold should go with the number of qubits? Because if you really implement this as a real task and scale the number of qubits, then instead of five you take six or whatever. It's a good master's thesis for us. It would take a lot of time to implement, not one day. So that's why I wouldn't say I have an intuition. Something will change, sure. Because there's no time. I understand, but I was wondering whether you developed an intuition, okay? This looks like a percolation problem, in some sense, on the tree. So maybe, you know, if you thought... also, the threshold is like 0.18, which is more or less one fifth. This is probably a good question for Ben, who works on this. Thank you very much.
So, last but not least, the Eigen criminals. All three of us, all of us, yes, here. Two minutes, two minutes. Got it, thank you. What just happened? Something just... Okay, I lost everything. Oh, here it is, okay, there we go. Hold up, it's giving me the spinning wheel of death. I will shortly move this over. There. It's already very big on the screen, so I'll just... Yeah, go really full screen. I have to adjust it every time you... I'll just push this. Yeah, okay. Can you hear me at the back? It's better to present normally. I don't know if I should go here. It's better to just use the clicker for this. Yeah, okay. I can take it back if you want, okay. Okay, so... Just click it, is it just the side buttons? You need to connect the USB key. Okay, that's fine, we can just use the USB key. I'll point it for you. Okay, it's working, yeah. There we go. Awesome, thank you. Testing, testing. Yes, sir. Awesome. Let me start again. All right, hello everybody. My name is Sakit, this is Kevin and Bao. We are the Eigen criminals. I know you guys have heard quite a few machine learning presentations today, some VQE, some QAOA, all that good stuff. I'm here to tell you we're not doing any of that. We are doing filtering VQE. Sorry. But yeah, before we start, I just wanted to give a huge thank you to Gabriel for guiding us through this, teaching us the basics of FVQE and allowing us to explore and make our own mistakes and discoveries. So, before we talk about our process with FVQE, let's talk about what exactly FVQE is. The problem is that we have a probability distribution over the eigenstates of the Hamiltonian of a system, and we want to reach the ground state with high probability.
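The idea of boosting the ground-state probability can be pictured numerically; a toy sketch in the Hamiltonian's eigenbasis, assuming the common filter choice f(H) = exp(-τH) (the team's actual filtering functions may differ):

```python
import numpy as np

def apply_filter(amplitudes, energies, tau):
    """Apply the (non-unitary) filter f(H) = exp(-tau*H) in the
    eigenbasis and renormalise: low-energy components are boosted."""
    filtered = np.exp(-tau * energies) * amplitudes
    return filtered / np.linalg.norm(filtered)

energies = np.array([0.0, 1.0, 2.0, 3.0])   # toy spectrum
psi = np.ones(4) / 2.0                       # uniform starting state

p0_before = abs(psi[0]) ** 2                                    # 0.25
p0_after = abs(apply_filter(psi, energies, tau=1.0)[0]) ** 2
```

In this toy spectrum one application already lifts the ground-state weight from 0.25 to above 0.86; FVQE approximates this non-unitary map variationally, because it cannot be applied directly on hardware.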
So, as you can see, we perform, in theory, a transform that takes us from a low probability of the ground state to a high probability of the ground state. We call this transform a filter. But on a quantum machine, since it's not always possible to directly apply the exact filter to the system, we use a variational quantum algorithm to incrementally modify the landscape. In doing so, we use a cost function: by building a cost function, we can optimize the amount by which our ground-state probability tends towards the maximum at each step. And yeah, now for our approach to the project. To begin building the cost function, we started by solving the weighted max-cut problem. We took the cost function of the problem and integrated it into the FVQE using a software package called qujax, which is good for gradients, and we developed ansätze for our circuit, so we could initialize our circuit before we implemented FVQE. After that, we implemented the parameter shift rule, both the theoretical exact parameter shift and an estimation via circuit sampling. After that, we explored FVQE. We looked at the different ansätze and problem sizes and how well they scale. We looked at various filtering functions, to see if we could compose multiple filtering functions together to create a better one. We also looked at multiple hyperparameters, such as tau, which is a hyperparameter in the filtering function, and the learning rate of the VQE itself. And we capped it off by comparing our solution between a noiseless system and a noisy system, so we tested multiple backends as well. In essence, we used a variational quantum algorithm that gives us a step-wise simulation of this transform, and step by step we apply gradient descent and see how well that maps onto this transform.
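The exact parameter-shift rule mentioned above can be checked on a one-parameter toy circuit (a single Ry rotation measured in Z; everything here is illustrative, not the team's ansatz):

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def expectation(theta):
    """<psi(theta)|Z|psi(theta)> with psi = Ry(theta)|0>; equals cos(theta)."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ Z @ psi

def parameter_shift_grad(theta):
    """Exact gradient from two shifted evaluations of the same circuit,
    valid for gates generated by a Pauli operator."""
    return 0.5 * (expectation(theta + np.pi / 2)
                  - expectation(theta - np.pi / 2))
```

The two shifted evaluations reproduce the analytic derivative -sin(theta) exactly, which is what makes the rule attractive for gradient descent on hardware, where each evaluation becomes a sampled circuit run.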
And incrementally, over multiple steps, it simulates something as close as we can approximate to the transform we're looking for. The filter is inherently a function, but we build that function into the variational algorithm so that we give it a step-wise growth. You could do a better job explaining the first slide. Yeah, let me try again. So the F shown on this slide is unitary. The filtering function overall may not necessarily be unitary, but we break it down: we don't implement the exact function, we implement an approximation within the VQE, using the cost function to map it out in a similar manner, so that it approximates the filtering function. So we started off by solving the max-cut problem. We chose max-cut because it's a simple combinatorial optimization problem, and therefore it has a very simple cost function to compute. In theory this is just a black box, because to initialize the FVQE we only need some black-box function that maps a bitstring x to a cost f(x). So we used the max-cut cost function as the black box to which we'd apply the filtering VQE. We made a graph with weights, applied the cost function, solved for the optimal solutions, scaled all the costs between zero and one, and created a diagonal Hamiltonian from that; the lowest entry was treated as the ground state, whose probability we were trying to maximize. After that, we moved on to creating the FVQE circuit itself, which started with the ansatz. For this, I'll hand over to Kevin. Okay, so it turns out that we need an ansatz to implement the FVQE. We chose these three ansatze, and they were not selected randomly.
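The max-cut-to-Hamiltonian mapping just described can be illustrated in a few lines; the graph, weights, and rescaling convention below are hypothetical stand-ins, not the team's actual instance. Every bitstring gets the negative of its cut weight as an energy, the energies are rescaled to [0, 1], and the minimum entry of the resulting diagonal "Hamiltonian" is the ground state:

```python
import itertools
import numpy as np

def maxcut_cost(bits, edges):
    """Negative weighted cut value, so the best cut is the minimum energy."""
    return -sum(w for (i, j, w) in edges if bits[i] != bits[j])

# A small weighted graph as (node_i, node_j, weight) triples (example values).
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 0.5)]
n = 3

# Diagonal Hamiltonian: one energy per bitstring, rescaled to [0, 1].
energies = np.array([maxcut_cost(b, edges)
                     for b in itertools.product([0, 1], repeat=n)])
energies = (energies - energies.min()) / (energies.max() - energies.min())

ground = int(energies.argmin())
print(f"ground state bitstring: {ground:03b}")  # → ground state bitstring: 010
```

This vector of energies is exactly the "black box" the talk refers to: FVQE never needs the graph itself, only the map from bitstrings to costs.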
We actually based them on this paper, because these three particular ansatze have different levels of expressibility, which gives us a way to compare and evaluate the performance of the FVQE. Okay, so let's move on. These are the results for the first ansatz. We ran it for a few qubit counts: three, five, seven, nine, eleven, and thirteen. As you can see, in all of these cases it successfully converges, with the convergence rate decreasing as we increase the number of qubits. That's something we kept in mind in the following implementations. One thing I want to point out, and I'll do it again on the next slide, is this fluctuation in the ground-state probability at the end, once it has converged; this can be fixed just by changing the learning rate. Okay. So these are our results for the other two ansatze. What I want to show here is that the second ansatz, the one with the most expressibility, has a better convergence rate in this case: it has more parameters, and so more expressibility, so it tends to reach a high ground-state probability much faster. So what happens when we make these circuits a bit bigger? Here, using 19 qubits, we simulated ansatze one, two, and three with different numbers of layers, and you can also see the number of parameters each ansatz requires for a given number of layers. For ansatz two, which already has 418 parameters at that size, we only simulated one layer, because these runs take a lot of time.
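The talk doesn't reproduce the three ansatze from the paper, but a typical hardware-efficient layered ansatz, per-qubit Ry rotations followed by a CNOT ladder, shows the structure being discussed and why the parameter count grows linearly with both qubits and layers. This is one plausible construction, not necessarily any of the paper's three:

```python
import numpy as np

def ry(theta):
    """Single-qubit Ry rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [q]))
    return np.moveaxis(state, 0, q).reshape(-1)

def apply_cnot(state, ctrl, tgt, n):
    """Apply CNOT by swapping the target's 0/1 slices where control = 1."""
    state = state.reshape([2] * n)
    out = state.copy()
    sl0 = [slice(None)] * n; sl0[ctrl] = 1; sl0[tgt] = 0
    sl1 = [slice(None)] * n; sl1[ctrl] = 1; sl1[tgt] = 1
    out[tuple(sl0)] = state[tuple(sl1)]
    out[tuple(sl1)] = state[tuple(sl0)]
    return out.reshape(-1)

def ansatz_state(params, n, layers):
    """Layered ansatz: Ry on every qubit, then a CNOT ladder, per layer.

    Uses n * layers parameters, i.e. linear growth in both."""
    state = np.zeros(2 ** n); state[0] = 1.0
    k = 0
    for _ in range(layers):
        for q in range(n):
            state = apply_1q(state, ry(params[k]), q, n); k += 1
        for q in range(n - 1):
            state = apply_cnot(state, q, q + 1, n)
    return state

rng = np.random.default_rng(0)
psi = ansatz_state(rng.uniform(0, 2 * np.pi, 3 * 2), n=3, layers=2)
print(np.round(np.abs(psi) ** 2, 3))  # a normalized distribution over 8 bitstrings
```

More rotation angles per qubit per layer (as in a more expressive ansatz) multiplies the parameter count accordingly, which is why the most expressive ansatz becomes expensive to simulate first as qubit counts grow.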
Okay, so as you can see, we don't run into any problem when we increase the number of qubits; it just takes a bit more time. So, what about the filters? For this I'll hand the stage back to Sakit. Well, it's not filtering VQE unless we have filters, right? So what about the filters? The most commonly used filters are the inverse, exponential, and cosine filtering functions, so those were the ones we tried with our cost function, and the inverse easily outperformed the other two. We decided to experiment a little more and wondered what would happen if we composed filtering functions by stacking several of them. For example, in the graph here the orange line is a composition of the inverse and cosine filters, and the green line is a composition of the inverse and exponential filters. Compared to all of these, the simple inverse function was the most effective: not only did it converge to a high probability the fastest, it was the only one that survived beyond seven qubits. Past that point, all of the composed functions tended toward zero probability for the ground state. After comparing the filters, we moved on to hyperparameter comparisons. The first we tested was tau, the hyperparameter in the filtering functions, over a range of values from 0.01 to 5. Our conclusion was pretty straightforward: a higher tau leads to faster growth in the curve, so it reaches a high probability sooner, but at the same time, no matter what value of tau we used, the system was for the most part pretty robust to the change.
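The filter shapes named here can be written down explicitly. Exact conventions vary across the FVQE literature, so the forms below, the small shift inside the inverse, and the tau value are all illustrative; the comparison just shows how sharply each filter concentrates probability on low energies in a single step:

```python
import numpy as np

# Candidate filter functions, each parameterized by tau (illustrative forms).
filters = {
    "inverse":     lambda E, tau: (E + 1e-3) ** (-tau),
    "exponential": lambda E, tau: np.exp(-tau * E),
    "cosine":      lambda E, tau: np.cos(tau * E),
}
# Composing two filters just multiplies them pointwise.
filters["inverse*cosine"] = lambda E, tau: (
    filters["inverse"](E, tau) * filters["cosine"](E, tau))

# Energies rescaled to [0, 1] as in the talk; compare one filtering step
# starting from a uniform distribution over 8 eigenstates.
E = np.linspace(0.0, 1.0, 8)
for name, f in filters.items():
    w = f(E, 2.0) ** 2
    p = w / w.sum()
    print(f"{name:15s} ground-state prob after one step: {p[0]:.3f}")
```

The inverse filter diverges as E approaches zero, which is why it amplifies the ground state so much more aggressively than the smooth exponential and cosine shapes.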
We also tried variable taus: we built it so that tau would adjust as the probabilities changed, as a function of the cost, and even with that, up to 19 qubits, we saw no difference at all in the growth of the curves. After tau, we also checked the learning rates. Unlike tau, we saw a great deal of difference there. For constant learning rates we tested from 0.1 through 1. As you can see here, the lower learning rates didn't converge quickly enough; around 0.35 gave very good convergence. Anything over that, and the system would over-correct and fluctuate a lot as it got near a high probability. In theory the highest usable rate is pi over two; anything beyond that collapses almost immediately to zero probability. But we were able to stabilize the top end by creating an adaptive learning rate that was a function of the cost and of the learning rate itself. We found this works really well, and the optimal base learning rate for it was 0.9. After this, we tested on noisy and noiseless systems, using the IBM Q Hanoi noisy backend for the simulations. We ran noisy simulations for three, five, seven, nine, and eleven qubits. The ansatz that worked best earlier was too big for a noisy backend at higher qubit counts, so for it we only tested three and five qubits; for the rest we used ansatze one and three. As you can see, as the number of qubits increases, the adaptive learning rate performed the best out of everything. And I think that's the conclusion: we were able to build a simulation of the filtering functions using FVQE, and creating an adaptive learning rate gave us very good convergence toward a high probability, whereas the other variables were a bit less rewarding. Thank you. Thank you very much. That leaves us, let's say, two minutes for questions.
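The talk doesn't give the exact formula for the adaptive rate, only that it depended on the cost and on the base rate itself. One plausible schedule with that shape, shown on a toy one-dimensional landscape, scales a base rate of 0.9 by the current cost, so steps are large far from the optimum and shrink near it, damping the late-stage oscillations described above:

```python
import numpy as np

def adaptive_descent(grad, cost, theta, base_lr=0.9, steps=50):
    """Gradient descent with a cost-scaled step size.

    The exact schedule the team used isn't stated in the talk; scaling the
    base rate by the current cost is one hypothetical choice that damps
    oscillation once the cost is nearly minimized."""
    for _ in range(steps):
        lr = base_lr * cost(theta)      # big steps far away, tiny steps near the optimum
        theta = theta - lr * grad(theta)
    return theta

# Toy landscape: cost(theta) = 1 - cos(theta), minimized at theta = 0.
cost = lambda t: 1.0 - np.cos(t)
grad = lambda t: np.sin(t)

theta = adaptive_descent(grad, cost, theta=2.5)
print(round(float(cost(theta)), 6))
```

With a fixed rate of 0.9 this landscape would overshoot and oscillate; the cost-scaling is what lets such a large base rate stay stable, which mirrors the behavior reported for the adaptive rate.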
Can we go back to how the filter is applied? This was a classical problem, so the Hamiltonian is diagonal, right? Yes. Could you apply this filter technique to a quantum Hamiltonian? To a quantum Hamiltonian? Yeah. In theory, I don't see why not; in practice we haven't tested it yet. That's actually something we were... It's an in-theory question: is there any obstruction? Can the way in which you apply the filter be generalized to a generic, non-diagonal Hamiltonian, from chemistry or whatever? Maybe if you explain how you applied the filter... The filter was applied over the energies of the eigenstates themselves. So even for a proper quantum Hamiltonian, as long as we're optimizing for increasing the probability of the ground state, which is the lowest eigenstate, it's definitely possible to apply filters and do that, which is the whole point for quantum Hamiltonians. So for classical Hamiltonians, maybe it's much easier to apply these filters? Perhaps. What we did here is use the cost function from the max-cut problem, which is a classical computing problem, and we generated our data sets from that. Right, and this is why you can apply your classical function directly in the cost function: because you know the energies, you know what the eigenstates are, and you apply the filter on the energies directly. Right. Okay, I see. That clears things up. Thanks. Thank you very much. And that closes the hackathon, I think. Yeah! I'm pretty sure not everyone is here, but one piece of info at least to start with: there's no lunch here, as was said this morning. Lunch will be at the cocktail venue. The shuttles depart from here at 1:15 for the first shuttle and 1:45 for the second. So you've got time to chill, and to starve for a bit.