And he's also talked a bit about Qermit, which is the software package we developed for actually implementing these schemes of interest. Well, I'm going to do the follow-up to that, which is that I'm actually going to show you all how to use Qermit, or at least give a basic introduction. Now, this is going to be a demonstration as opposed to a hands-on tutorial, but you'll see here I've got a link, which I will also put in the suitable Slack channel on the Slack workspace you've got for this conference, and which has the resources that Dan showed you this morning and that I'm about to show you. So you'll be able to access these notebooks after the talk to get a better, more hands-on idea of what I'm about to talk about. There will also be an element of pytket to this, so hopefully Yosie and Catherine and Callum set you all up suitably yesterday, so that it will all make some more sense. Okay, yeah. So technically this is part two of our error mitigation morning, in that Dan has already introduced you all to noise and error mitigation and given an overview of what Qermit is, both as a paper we worked on and as a software package. So, like I said, we'll be going through an introduction and getting started with Qermit, then the out-of-the-box error mitigation methods that it supports and that you should be able to use really easily. We will cover at the end the slightly more advanced use of Qermit, which is the ability to combine schemes in quite a straightforward way; that was really a selling point when we were designing the whole software kit. And then I'm thinking we will not make it to the development of new mitigation schemes today, but I decided to leave that in this notebook so that if you do want to have a look afterwards, you'll be able to see how we actually create schemes in the first place. Also, I should mention now that if you have any questions, do just interrupt me as we go and I'll try to answer them as well as I can. Now, I'm sure Dan has just talked all about this, so you don't need me to remind you, but people often classify the era we exist in now, in terms of quantum computing, as being the NISQ era, which for our purposes, looking at error mitigation, says a couple of things. The first is that the devices we're interested in are defined as having a small number of qubits. Even noting that there's really great progress on the hardware side, with devices getting larger every year and noise rates going down, which is really brilliant, we're still not in a position where, in the near term, we're expecting to be able to do some kind of error correction, as I say on the second bullet point here, or implement some of the really good use-case algorithms that motivate a lot of the work in quantum computing, such as those that work on breaking current encryption protocols, or Grover's search algorithm. And so error mitigation is posed as a near-term technique for trying to get essentially better results when we run things on hardware, running better experiments. And we tend to define it as essentially a trade-off: running more circuits or more shots on my hardware, at that cost, in exchange for being able to reduce the noise.
So it's a more expensive experiment you're running, but if you've quantified the experiment properly and you've quantified the error mitigation properly, even though we're spending a bit more time with the hardware, which might be a bit more expensive, the payoff is that it will reduce noise in a way that's really beneficial to the experiments we're running. The other thing to mention is that typically the increased number of circuits and the increased number of shots in running these schemes is the extent of the trade-off, and that changes in the circuit size itself, say whether I want to add more qubits to be able to characterise some aspects of the noise, tend to be very modest, especially in comparison to something like a quantum error correction technique, where you tend to need many, many physical qubits to be able to encode a single logical qubit. That's something I'm sure my colleagues Ben Kruger and Kieran Ryan Anderson will introduce you to in far greater detail this afternoon, so I can leave that there. Okay then. So that's our background: devices that are noisy, so noisy that we can't do quantum error correction, but good enough that we can maybe do some things. And so then we swoop in and we say, oh, here's our open-source Python package Qermit, which stands for quantum error mitigation, and any similarity with the frog is coincidental. It is our open-source Python package for designing and executing digital error mitigation schemes like the type Dan just introduced you to. And when we say execute, we mean automatically: if it's all set up right, it should be really easy to just hit run and then go get a coffee or go do something else, so that when you come back, not only has your experiment run, but it's been run with a bunch of error mitigation. And there are a few selling points to Qermit. One is that it's implemented using pytket, which you should all be somewhat familiar with now, and which lets us build on top of the great work that's been done in pytket. It also gives it a couple of more generic features. So it's platform agnostic: you'll see later that I'll be using pytket backend objects to run various experiments with Qermit, and those backends are essentially interchangeable. I'm going to be using some backends built on top of the Qiskit software platform, just for convenience in these notebooks, since they're all open source and really easy to access, but for everything I'm looking at you could easily exchange them for a backend which runs on our Quantinuum hardware, and it should just work straight out of the box. So this makes it agnostic to the hardware you're interested in, as long as the error mitigation method is suitable for that hardware. And it also means it's easier to work with other software development kits. Say, for some reason, your preferred software development kit is Qiskit and not pytket: that's fine, we don't mind. You can develop your circuits in Qiskit and convert them into pytket with our converters, so that doesn't have to be a barrier to running better experiments. And then the final thing is that Qermit has a kind of common interface for generating these kinds of schemes and running them.
And in terms of the methods we have out of the box that you could use: well, I believe Dan has just introduced you to zero noise extrapolation, Clifford data regression and probabilistic error cancellation, which I'm hoping he's also characterised as being schemes which mitigate errors in expectation value calculations. That's the kind of thing we typically think of as being experimentally related to Hamiltonian simulation, the kind of thing chemists might be interested in. And we also have out-of-the-box error mitigation for a technique called frame randomisation, or randomised compiling, and also a further technique that works for state preparation and measurement errors. The point being that there's a bunch of out-of-the-box things here: you don't have to be an error mitigation expert at all to be able to run experiments, apply some error mitigation and see what happens. And then we're going to get a bit more hands-on with this. But here is a very quick introduction to the design of Qermit. Essentially what it does is represent experiments on quantum computers, and experiments on quantum computers with additional error mitigation, as dataflow graphs. And the vertices of the dataflow graphs are the kinds of things you might do in a typical experiment. So there'll be a vertex in the graph which takes a particular quantum circuit and gives it to a backend; and that backend, if you're running on actual hardware, interfaces with some API, sends your circuit over the cloud to the hardware and gets it all set up to run for you. There'll be a vertex that does that, as an example. And then I should also say that edges between these vertices define the flow of information from the start of the experiment to the end of the experiment. And then maybe the other interesting thing to mention is that we don't store these actual dataflow graphs in memory when you use the Qermit package; we store essentially generator functions which hold blueprints to create them. And that's what allows us to be really flexible in how you run the experiments, that's what allows us to use different backends when we run experiments, and that's what allows us to combine schemes. And then finally, before we actually start looking at some code: it is open source, it's available on pip, just pip install qermit. We've managed to snag the URL qerm.it, so if you go there, it should just redirect you to the documentation. And similarly, you can find the GitHub repository for the code itself in the CQCL organisation. Okay then, on to hopefully some actual code. Well, first of all, we divide the error mitigation methods that we're interested in, and that people are able to use in Qermit, into one of two types. One we call MitRes, and the other is called MitEx. MitRes experiments refer to any error mitigation method we've implemented that is designed to modify the distribution of shots retrieved from the backend. As we've kind of been saying, but just to make this even more clear, a typical workflow for a scientist running something on a quantum computer is: you have some Python package, like, say, pytket; you use its circuit generation to create a quantum circuit that you want to actually run on some hardware; and then, with pytket, you use our pytket backends for the different hardware.
But abstractly, there is some API which you give your circuit to, and that then sends it over to the hardware providers. And on the side of the hardware providers, they are given your quantum circuit, and this instructs their actual quantum computer to initialise all the qubits in the zero state, run a transpiled version of the quantum circuit, down to the specific hardware instructions, so, you know, the shape of the pulses being run, let's say, or maybe the physical transport of some of the actual physical qubits themselves, and then at the end measure them all, typically in the Z basis, projecting them into a set of Z eigenstates which correspond to zero and one eigenvalues, which you get back as a shot. And so MitRes captures error mitigation methods that work within this kind of environment. So you have a circuit and you get shots back, and a MitRes will run that for you automatically, but it might also run it for you while also applying error mitigation. From the outside perspective, as a programmer working with it, you send circuits, you get shots; and if those shots are better, that's great. MitEx refers to the other type we'll be looking at later, and covers experiments where you're typically interested in the expectation value as the estimator of some observable of interest, and where the mitigation modifies that quantity. We'll talk a little bit more about that later, because we'll start with MitRes. And in terms of how we'll consider what each one does: we'll be looking at how we might implement what Qermit does just in raw pytket code, we'll then see the equivalent of how it's done in Qermit, we'll have a look at how it might perform with or without errors, and then we'll apply an error mitigation technique out of the box to try and improve the results we're getting. Okay then. So if we're going to do any of this, we need a solid candidate circuit to try and show an improvement in results on. Now, I wasn't watching yesterday, but I would be astonished if at some point nobody has shown you a Bell pair circuit, or drawn a Bell pair circuit somewhere. So I'm going to assume that you kind of know what a Bell pair circuit looks like, and you've probably also seen how to generate it in pytket. And so that's what we've got here. I'm assuming you'll know pytket by now: we can import a Circuit object from pytket, we can create a circuit with two qubits and two bits, and we can use the indexing to apply the gates that are required to make the circuit. We've got a Hadamard, we've got a controlled-X gate, and a measure, and you've probably all seen this kind of circuit diagram by now, so we can have a look at what it looks like. And this is the circuit we're going to be considering for the purposes of whether our error mitigation can be helpful or not. And I guess the really important bit here is to remember that if we've generated a Bell pair and there's been no noise, we expect the state we construct to give an equal distribution over the zero-zero state and the one-one state. And if we're not getting that, something has gone wrong.
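For reference, here is a minimal sketch of the Bell pair construction just described. It paraphrases the notebook cell rather than reproducing it, and the variable name bell_circuit is just illustrative:

```python
from pytket import Circuit

# Two qubits, two classical bits.
bell_circuit = Circuit(2, 2)
bell_circuit.H(0)           # Hadamard on qubit 0
bell_circuit.CX(0, 1)       # controlled-X, qubit 0 controlling qubit 1
bell_circuit.Measure(0, 0)  # measure each qubit into its own classical bit
bell_circuit.Measure(1, 1)
```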
Okay, so how would we run a Bell pair just using pytket? We don't even know what Qermit is yet; we've just started using pytket. Well, what do we do? How do we do this? Well, hopefully you've also been introduced yesterday to the selection of backends we provide in pytket, which are a standard set of objects for running experiments with different hardware and simulators. So what I can do is, from the Qiskit extension for pytket, import an AerBackend object. This is just going to be a noiseless simulator: when I pass it circuits, it's going to do a noiseless simulation of the gates in that circuit, the unitary that the circuit defines, and then use that process to generate a bunch of shots, matching the number I've asked for. And so, okay, I create my backend object here, and then I use the run_circuit method: I pass it a circuit, which is the Bell pair circuit we defined, and I pass it the number of shots I want to take, which is 100,000 here. And then I use some sneaky behind-the-scenes code to do the plotting of the counts for me, to make it a bit easier for everyone to see. And so I get my counts object back here, the distribution over the shots that I've got from the simulator running the circuit of interest, and I get something like this. And from our understanding of what a Bell circuit looks like and what a Bell pair is, this is about right; we can fairly well trust that this has been a noiseless process, because we've got approximately an even distribution of 00 and 11 states, which is what we expect to see. And this is fairly simple. If we take a step back, the Bell pair generating code in pytket and the code to actually run it through a backend and get some results back are all fairly simple. So MitRes will also do this for us, as we're about to see; but if you're only interested in this, it's not doing much here. We'll have a look anyway, though. So, as I said, MitRes is the set of experiments that Qermit defines for when you have a circuit and you want shots, and it runs that process for you. We can import it from Qermit, unsurprisingly: it's a top-level import, MitRes. And we can define it with a backend object. This ideal backend is the AerBackend we used just a moment ago to do the simulation with regular pytket, and this produces our new object, the ideal MitRes, which is hopefully going to do some helpful things for us. How do we run it to do the exact same experiment we just did, then? Well, we have to define our experiment as an input. In a MitRes, each experiment is defined as essentially a pair, which we wrap into a named tuple: it's a CircuitShots object, and it's defined by two things, the quantum circuit we want to run, which is the Bell pair circuit we just looked at, and the number of shots we would like to receive of that circuit. And in Qermit, because it can run a bunch of experiments for you in parallel, we have to wrap this into a list, because at the top level it takes a list rather than a single thing. I hope that's fine. So this is our experiment defined: we've got a list containing a single experiment, and the MitRes object we've just generated, which is running through the noiseless backend we just used, has a .run method, which is a local runtime for retrieving the results we want. When we look at the representation in a second, I'll explain a bit more about how that runtime works. We run this, though, and we get a result list.
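Side by side, here is a sketch of the plain pytket run and the MitRes run just described. It assumes pytket-qiskit and qermit are installed, and that CircuitShots takes the circuit and shot count in that order, as in the Qermit documentation; check against the notebook:

```python
from pytket.extensions.qiskit import AerBackend
from qermit import MitRes, CircuitShots

ideal_backend = AerBackend()  # noiseless shot-based simulator

# Plain pytket: run the circuit directly and read the counts back.
result = ideal_backend.run_circuit(bell_circuit, n_shots=100_000)
print(result.get_counts())  # roughly even split between (0, 0) and (1, 1)

# The same experiment through Qermit: wrap the circuit and shot count
# in a CircuitShots named tuple, put it in a list, and call run.
ideal_mitres = MitRes(ideal_backend)
result_list = ideal_mitres.run([CircuitShots(bell_circuit, 100_000)])
print(result_list[0].get_counts())
```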
Now, if you use a normal pytket backend, you pass it a circuit and the number of shots, and you get a BackendResult object back, which hopefully you were introduced to yesterday. Maybe unsurprisingly, then, the result we get back here is a list of these BackendResult objects, so it's the same kind of data. And then if we get our counts back and we plot them, we see that for the noiseless simulator... okay, we had to run the experiment in a slightly different way, and we had to use a whole different package to do it, just to run the same thing, so maybe that's not that appealing as a base-level thing. But if we then plot the results we get, it's about an equal distribution of 00 and 11, and that's good; that's what we want to see. So Qermit isn't doing anything funny under the hood at this point. It's just taking the circuit, running it through the backend, and giving you some shots back. And then, to show you what this looks like: I realised sadly just before this talk that my HTML hadn't rendered properly, so we're going to have to switch to the notebook itself to see this. But as I mentioned, Qermit stores the experiments it runs, and eventually the error mitigation schemes Qermit runs, as dataflow graphs which are generated on demand. So we just talked about running our circuit through a backend via a Qermit MitRes object; well, what was that MitRes object doing? There's a helper method on the class, get_task_graph, which will give you a visualisation of what happens. And so the process we did looks something like this. Well, okay, let's take it bit by bit. It's a dataflow graph, so the vertices define generic functions you might want to perform when running something on a quantum computer, and that's what these big green boxes represent. So one of them says circuits to handles: this stage is taking the quantum circuits and passing them to a backend to get an object called a handle, which we can later use to retrieve the result; hopefully you were introduced to this kind of process yesterday. And then handles to results uses these handles to go back to the same backend and say, hey, I want my result for this handle, and it gets back that result. So the whole process we've done when we call .run is: we pass our circuits and number of shots in as the inputs; these get passed to a task which passes them to a backend to be run; that task passes the unique identifiers for the experiments back to the backend and says, where are my results; and then they get passed back. So this is a really basic example experiment it's done for us, but it has done it for us automatically. And this is another quick description of what I just said: these dataflow graphs live in a Python class we wrote called TaskGraph, though I don't think that's too important, and I mentioned that the vertices are essentially functions. In practice they're not just functions; they're actually instances of a Python class we wrote called MitTask, but that's only so we can add additional attributes to the functions. So that class knows the function that's run on the input data; it also knows the number of in-edges it needs, the number of out-edges it has, and a name; it's just additional bookkeeping information, essentially. The edges of the graph move the data between the MitTask objects.
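To reproduce the visualisation being discussed, the call is just the helper mentioned above, shown here on the ideal_mitres object from the earlier sketch (assuming you are in a notebook environment where the graph can render):

```python
# Draw the dataflow graph behind the MitRes we just ran: circuits in,
# circuits submitted to the backend for handles, handles later exchanged
# for results, results out.
ideal_mitres.get_task_graph()
```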
Something I should have mentioned, which I hope is fairly clear, is that data moves from top to bottom on these graphs, so we go from inputs to outputs. And then the final thing is that, at the moment, Qermit unfortunately only has a fairly basic local runtime. We define these graphs because it gives additional granularity to the processes we're running, and it means that if you've got a really good runtime you can run the various things in parallel, which is really great when you're doing stuff on quantum computers, because often when you run something on a quantum computer, I don't know if anyone has told you this, you spend a lot of time waiting for the quantum computer to run, because lots of people want to use them. So being able to parallelise that kind of process is really helpful. Currently the runtime is just local: it does a topological sort on the dataflow graph and then runs the tasks sequentially. So you're not saving as much time there, but it is still doing it all automatically. Okay, quick aside into Qermit fundamentals done; we'll go back to the actual code bit. So, we've just seen some noiseless simulations of a Bell pair, and we've looked at the results and gone, yeah, that makes sense, it's about an even distribution of the two states we're meant to see. And if all our computations were noiseless, we wouldn't need error mitigation. But ultimately Qermit exists to do error mitigation, which suggests that maybe sometimes quantum computers do things we don't like, and we need to try to account for it or correct for it. So let's create a scenario in which we do add some noise, and then we can see how we can improve on the results. We're going to do this by creating another AerBackend object like the one we had before, but we're going to pass it a noise model. I'm passing it a depolarising noise model: it's going to add errors every time I run a single-qubit gate or a two-qubit gate, with some probability slightly changing the unitary that is implemented, and it's also going to add something called readout noise, so when I measure my qubits it's going to probabilistically give the wrong result. This is an artificial noise model; it's something I'm asking it to do. It is a handy tool, though, if you're trying to work out the performance of techniques you're working on, or maybe the performance of some kind of circuit compilation you're doing. The code for this noise model I'm not going to show here, but if you access the notebook you'll see that there are some hidden cells, and one of these hidden cells has it. So if you're really interested you can go have a look through the notebook yourself afterwards and see how we create that noise model. For a top-level explanation, the point is that instead of our backend being noiseless, with everything being perfect, now when we run stuff through it we're going to get some noise. So we can now run a very similar experiment to before. We construct a new MitRes object, one where the backend is the noisy simulator rather than the noiseless simulator. We do this via a generator function, gen_compiled_MitRes, so we're starting to get into the world of slightly more complicated MitRes dataflow graphs. This one, in fact, is very similar to the one we looked at before, in that it sends circuits to a backend to get results, which it then returns to you.
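Since the notebook hides its noise model cell, here is an illustrative sketch only: the error probabilities and gate names are made up for this example and will differ from the hidden cell, and the import path of gen_compiled_MitRes (taken here from qermit.taskgraph) is an assumption to check against the Qermit documentation:

```python
from qiskit_aer.noise import NoiseModel, ReadoutError, depolarizing_error
from pytket.extensions.qiskit import AerBackend
from qermit.taskgraph import gen_compiled_MitRes

# A toy depolarising-plus-readout noise model.
noise_model = NoiseModel()
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.01, 1), ["h", "rz", "sx", "x"])
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.05, 2), ["cx"])
noise_model.add_all_qubit_readout_error(ReadoutError([[0.97, 0.03], [0.03, 0.97]]))

# A noisy simulator backend, and a MitRes whose graph compiles circuits
# for that backend before submitting them.
noisy_backend = AerBackend(noise_model)
noisy_mitres = gen_compiled_MitRes(noisy_backend)
```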
The only difference is that now, before it sends a circuit to the backend, it's going to perform a basic pytket compilation on the circuit, just to make sure that the backend is able to run the circuit we provide. And the reason we need to do that in this case is because when we define a noise model, we say something like: for the Hadamard gate, with some probability, run a slightly different process; and so our circuit needs to be defined in terms of the gates we've defined the noise model for. I mean, this is also true in general: if you're running on hardware, you'll need to make sure your circuit is rebased to the primitives that the hardware actually supports, which is something pytket does automatically for you if you ask it very kindly, and something that I'm sure Callum told you all about yesterday. Okay, so the main point being: we've now got a new MitRes object where, when we run stuff through it, it's going to do a noisy simulation, and the results aren't that good. Even with this new object, though, the interface for running it is the same. We've got our list of CircuitShots that we generated earlier, which runs a Bell pair circuit 100,000 times, and we pass this through to the run method of our noisy MitRes to get some noisy results back. And then we have a look at what the results are saying. Well, in a way this is a good thing, because it means we've defined our noisy simulator correctly. Before, we got only 00 and 11 states, so our Bell pair simulation was done perfectly, and the difference in the distribution was just due to sampling noise as opposed to an actual issue with the quantum computer or the simulation we were running. Here, though, we've got some 01 and 10 states, which we know are not valid results for the kind of quantum state we've tried to produce. So this is now the kind of situation where we're thinking: well, how do I improve on this? Maybe my device is too small and too noisy for me to run quantum error correction; maybe there's another technique; maybe quantum error mitigation will help. So let's just use an out-of-the-box error mitigation method and see if it works for this noisy simulation we've done. As I mentioned earlier, for the MitRes kind of experiment we have two types of error mitigation supported out of the box: one is the frame randomisation we'll not touch on today, but the other is for SPAM errors.
SPAM is a nice way of referring to state preparation and measurement errors. I.e., as I mentioned earlier, the way you run experiments is that you define a quantum circuit and you send it off to the hardware people, and what they'll do is initialise their set of physical qubits in the zero state, then run the operations you define, and then measure them all in the Z basis. So state preparation refers to errors in the initialisation: you're supposed to construct an all-zero state, but there is a chance that there's noise in that process, so you don't quite do that. And the measurement, obviously, refers to the bit where you measure at the end in the Z basis, and maybe there's a bit of noise there and it doesn't quite return the result you expect. So this Qermit SPAM module has a few options; we're going to use the uncorrelated option for SPAM correction when we run these experiments. Uncorrelated in this sense essentially means that you make the assumption, about the noise profile of your device, that for each individual physical qubit the error in state preparation and measurement is independent of all the qubits around it, so we can model them all separately. This isn't necessarily true in practice: it might be that when you measure a single physical qubit it gives the wrong result because of something like an over-excitation, and this then affects the results around it, so you might have to gather more information. But if you assume it's all uncorrelated, then you only need to gather quite a small amount of information, and in particular a scalable amount of information, to be able to correct the results you get. So we use this in-built generator function, which follows a blueprint to create a dataflow graph that will implement SPAM mitigation automatically. We pass it our noisy backend, so it's running everything through the noisy backend, which is what we're interested in now, and we also pass it a number of shots. This number of shots refers to how many times we run the calibration circuits for SPAM error mitigation, to get the characterisation information we desire: the more shots you pass it, in general, the more precise it's going to be, although there is a point where passing more shots stops making much difference; we won't be talking about that kind of analysis today. Once we have the object, though, running it is the same as any MitRes: you just call .run and pass it your list of circuits and shots. And we can see here, in the artificial example I've created to try and convince everyone that error mitigation can be useful, that, lo and behold, it does something. We've got the noisy results we saw a second ago here, and we've now got some results where we applied SPAM mitigation on top of the noise model. So, you know, it is fairly artificial, but we can see it's doing something, and we can visually see that we've got fewer 01 and 10 states, the states we don't want to see, so it's probably doing something better. And the important takeaway for why Qermit is hopefully doing something helpful is that I've not had to really explain how SPAM error mitigation works; in particular, I've not explained how the SPAM error mitigation we've implemented works. In terms of an interface, it's creating a different MitRes object and running it as before, and now our results are better. So, you know, in an industry you need specialisation in different areas: you don't need to know about error mitigation to get good error-mitigated results.
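A minimal sketch of the construction just described, using the noisy_backend and bell_circuit from the earlier sketches. The generator name gen_UnCorrelated_SPAM_MitRes and the calibration_shots keyword are my reading of the Qermit SPAM module, so treat them as assumptions to verify against the documentation:

```python
from qermit.spam import gen_UnCorrelated_SPAM_MitRes

# A MitRes whose dataflow graph runs uncorrelated SPAM characterisation
# circuits on the noisy backend and uses them to correct the counts of
# the experiment circuits.
spam_mitres = gen_UnCorrelated_SPAM_MitRes(noisy_backend, calibration_shots=10_000)

# Same interface as any other MitRes.
spam_results = spam_mitres.run([CircuitShots(bell_circuit, 100_000)])
print(spam_results[0].get_counts())
```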
And then finally, before we move on to MitEx, which is the other type of experiment, we'll have a very quick look at the kind of task graph it generates. As I said, when we call the generator function it automatically creates a dataflow graph representing the error mitigation experiment we want to run, and we can visualise it here with the same .get_task_graph method; you can call this on any of them. And this is a definition of the overall SPAM-correcting process we've just run automatically, so these are all the tasks that are actually happening under the hood. We can very quickly talk through it. I think the most important bit is to notice these two tasks down here: these are equivalent to the MitRes tasks we saw earlier, so we can see that we're still running stuff through hardware. And actually, how this graph is generated is that it takes a MitRes object like this, and what the blueprint does is define how we add the other tasks around it: which tasks we append, which tasks we prepend, which tasks we add in parallel. And so, at a very high level, we can see our experiment's CircuitShots come in through this input; this task here works out all the characterisation circuits we need to be able to do our SPAM correction later, and generates them; then the actual experiment circuits are sent down this left-hand side of the graph, which essentially just runs them all through the device as in a normal experiment; and this right-hand side of the graph runs the characterisation experiments for my SPAM error mitigation through the same device, to generate the calibration data. So we've got two points where we're running through a device, and then I've got a final task which takes my experiment data, from just running my circuits through the backend as normal, takes the characterisation data I've got from my SPAM mitigation, combines them together, and returns a distribution which has been corrected. And so it does all this automatically for you, which is nice. Okay, that is approximately part one of this talk, then. So we've looked at broadly what Qermit does, and we've built up from how we might run an experiment with pytket, to how we might run an experiment with MitRes, to how we can add a noisy simulation to our experiments if we're testing things locally, and then finally to how we might use the kind of error mitigation scheme provided by Qermit to get better results. But, as I mentioned, when we were first looking at error mitigation schemes in the literature, we found that people try to apply error mitigation wherever they can when they run experiments, because in general it can improve things, and one place where people tend to define error mitigation schemes is experiments where they're essentially trying to get expectation values. They define error mitigation schemes which affect the resulting expectation value; that means not the actual shots that come back, since they get the shots as normal and process them, but the quantity returned from that processing is what they then error-mitigate. And so we have a few schemes in Qermit that automatically apply this kind of mitigation, and to show this off, and we've got about 20 minutes so it should be fine, we're going to see how you would do this in normal pytket, and then how you would do it in Qermit.
We'll then look at it with and without noise, very similar to what we just did, and then we'll apply a scheme and see if it improves things; and, spoiler, it is going to improve things, because we wrote the demonstration to show that. Okay, so let's start off by trying to define an experiment which we can try to show improvements for. For our purposes here, we're just going to create a random circuit made up of a selection of gates defined by two-qubit unitary matrices. If you're interested in how we define this random circuit, once again the code is hidden in the notebook somewhere, so you can go and work it out, and we can see a visualisation. For our purposes, though, the question is: what is the ideal expectation value this random circuit I've generated is going to give me? Now, this is how you might run such an experiment in pytket; I'm also hoping this kind of idea was introduced yesterday, but we'll go through the steps very quickly. Okay, we start off by copying our ideal circuit, so we know there's no funny business going on. Then, for our ideal backend, which is the AerBackend we created earlier without any noise, we call the get_compiled_circuit method to produce a circuit which is compiled. This is hopefully not a surprising step, because general quantum simulators don't accept a unitary two-qubit box as an input gate; that's not something they know, they don't know what that is, so we have to turn it into a sequence of gates which they do know. That's what we're doing in this first step. Then we need to define the observable we want to measure. In this case, we're saying we want the all-Z observable, which essentially corresponds to projecting my output quantum state onto Z eigenstates for all of my qubits, and that is what this thing is defining here. There is an object in pytket for this, built from the Pauli class; if you're not sure about Paulis, this afternoon in the quantum error correction session you are going to become intimately aware of what they are, so you can hold on to that. We can create a QubitPauliString, and this essentially says: for all of my qubits, which is what this list is generating here, I want a Z term in my measurement, and that's what these Pauli.Z entries are saying. So if you know about quantum chemistry and you've run this kind of experiment before, I hope this is making some sense. And then, in normal pytket, we've got a helper, get_pauli_expectation_value: I give it the random circuit we just looked at, my QubitPauliString, which says I need the all-Z term please, and the backend, and it gets you a result. And if you don't know much about experiments where we're getting expectation values, that's totally fine; the thing you need to take away from this is that the ideal expectation value is 0.55496 et cetera, et cetera. This is our target value: when we've got no noise, this is what we should be getting for our small experiment. So once we start adding noise and we don't get this, we need to try and recover this value, essentially. Okay, well, that's how we did it with normal pytket. Does it get any easier with Qermit? The answer, I think, at face value, is maybe not; but the point is that Qermit can do more things, and if you get used to this interface, suddenly a whole world opens up. But let's do the basics and build it up.
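Before the Qermit version, here is a sketch of the plain-pytket calculation just walked through. The name random_circuit stands in for the hidden random circuit, and the helper used is the get_pauli_expectation_value utility from pytket.utils, which may differ slightly from the exact call in the notebook:

```python
from pytket.pauli import Pauli, QubitPauliString
from pytket.utils import get_pauli_expectation_value

# Compile a copy of the random circuit for the noiseless backend, so the
# two-qubit unitary boxes become gates the simulator understands.
compiled_circuit = ideal_backend.get_compiled_circuit(random_circuit.copy())

# The all-Z observable: a Pauli Z on every qubit of the circuit.
all_z = QubitPauliString(compiled_circuit.qubits, [Pauli.Z] * len(compiled_circuit.qubits))

# Shot-based estimate of the all-Z expectation value on the ideal backend.
ideal_value = get_pauli_expectation_value(compiled_circuit, all_z, ideal_backend, n_shots=100_000)
print(ideal_value)  # about 0.55 for the notebook's circuit
```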
The first thing is that, similarly to how we were generating these MitRes objects for running error mitigation schemes which affect our distribution of shots, we can import a MitEx object. We can define it for the same noiseless backend, this Aer simulator, using the MitEx class initialisation, to get our MitEx object. So we've now got this thing here, the ideal MitEx, and if we can pass it the right thing it's going to automatically run our experiment for us, which is hopefully really helpful. However, the definition of a MitEx experiment is a bit more complex, because it has to hold more information. Something I should mention is that these experiments aren't all quite the same: in a typical quantum chemistry experiment, say, you would want to measure multiple qubit Pauli strings, not just the same one, and that's what this QubitPauliOperator object here will hold for us. It holds a dictionary from the Pauli terms we want to measure, i.e. QubitPauliStrings, to their corresponding coefficients. The coefficients are normally something physically motivated by, say, the Hamiltonian of choice for your experiment; for what we're talking about here, we don't need to worry about the coefficients, we just need to know that there's an object that can hold them. So this is saying, as before: I want to find the all-Z term's expectation value for this experiment I'm running. And then, for MitEx, we wrap each of our individual experiments into this class called ObservableExperiment. If you remember, before we had this CircuitShots thing; it was just a named tuple, but it was designed to hold everything needed to define that kind of experiment. Well, this is analogously the same: it's a named tuple designed to hold everything I need for my experiment. So I've got my ansatz circuit, which is my random circuit, and the number of shots I want to take of it for the experiment I'm running; and also, pytket supports symbolic circuits, so often you might want this circuit to have a set of symbols in it that you want to explore, and we pass those through this object here; we don't need to worry about that right now. At this stage we've got a circuit that we're taking expectation values from and the number of shots we want to take; and then we've got another level of abstraction to hold our qubit Pauli strings: we put our QubitPauliOperator into this new object called an ObservableTracker, which is going to essentially track all the computations we do and work out how to knit the results back together at the end. For now, just think of it as holding my QubitPauliOperator, the terms I want to measure. This whole thing defines an individual experiment, and we can pass this experiment, in a list, to our ideal MitEx object, the one with the noiseless simulation. If you can get your head around what the input arguments look like, then you're simply calling run, and you're getting your ideal expectation value back. And the results come back a bit like this: a dictionary over the terms we wanted to measure. So in that QubitPauliString we said that for each of the qubits we want to measure the Z term, and that's what each of these entries is saying, Z for qubit zero, Z for qubit one, and so on, and it says this term gets a value of 0.55656 and a string of digits. And, you know, it's a shot-based simulation, so there's going to be some sampling noise, which is why the expectation values differ slightly, but we can see they are approximately the same.
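And here is a sketch of the MitEx version of the same experiment. The wrapper names (AnsatzCircuit, ObservableExperiment, ObservableTracker, SymbolsDict) and their field order follow the Qermit documentation as I understand it, so check them against the current API:

```python
from pytket.utils import QubitPauliOperator
from qermit import (
    MitEx,
    AnsatzCircuit,
    ObservableExperiment,
    ObservableTracker,
    SymbolsDict,
)

ideal_mitex = MitEx(ideal_backend)

# The observable: the all-Z string from before, with coefficient 1.
operator = QubitPauliOperator({all_z: 1.0})

# Bundle the circuit, shot count and (empty) symbol map into one ansatz,
# then pair it with a tracker holding the operator to measure.
ansatz = AnsatzCircuit(random_circuit.copy(), 100_000, SymbolsDict())
experiment = ObservableExperiment(ansatz, ObservableTracker(operator))

# Run it: the result is a QubitPauliOperator mapping each measured term
# to its estimated expectation value.
ideal_results = ideal_mitex.run([experiment])
print(ideal_results[0])
```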
So they're doing basically the same thing, and that's good: our baseline is that, whether we do an experiment through normal pytket or through a Qermit MitEx, we get pretty much the same result, and that's a good thing. So now we need to work out how we can make our result worse, so we can then show how to recover it. But actually, first of all, we're going to have a very quick look at the task graph for a MitEx object, as we did for the MitRes and the SPAM MitRes. What does this do, then? Well, we'll keep this fairly abstract because we're slightly short on time, but the most important thing to notice is this little word MitRes here: all these experiments are at some point being run through hardware, which means they're being run through a MitRes object, because we build our graphs around it. Now, there are other things happening as well. Essentially, what the tasks beforehand do is, for the terms you want to measure and your ansatz circuit, work out all the measurement circuits that actually need to run on the hardware to get the results you want, and then the tasks afterwards do the post-processing of the results for you. So if you're an experienced quantum chemist who deals with these kinds of problems, you'll know how to process shots to get an expectation value; well, it just does that automatically for you. And so we can pass in our noisy simulator backend from earlier, the one with the depolarising noise model that I showed for the MitRes case; we can just pass that to our MitEx to get a noisy version of it, and we've already defined our experiment, so we can just pass that through the run method here. So this does the same experiment as before, and for the artificial noise model we've created, and are now trying to improve upon, we find that the value we get is 0.2692 and so on. This is not 0.55-whatever-it-was; it's quite a way from that. And so our metric of interest here is how to make this number closer to the number we want it to be, and we are going to try to do that with an out-of-the-box error mitigation method in Qermit. This one is going to be zero noise extrapolation, which I believe Dan has just introduced you to, but to give a very quick refresher on what it is: as we're seeing here, for a certain amount of noise, which is whatever this natural noise is, we get a certain value, which is 0.26, okay, and this is essentially a single data point. How zero noise extrapolation works is that it tries to artificially increase the noise experienced by the device. So this is noise level one, just whatever the device naturally has; if we can artificially inflate it so the amount of noise experienced is doubled or tripled, we can get new data points, 0.26 down to 0.15, et cetera, et cetera, and then we can extrapolate back, with some kind of fit, to the case where we have zero noise. And so this generator function is going to generate a MitEx object that does all of this for you, so you don't really have to know what's going on; you just say, I want better results, maybe this will do it, and hopefully it does. So, maybe unsurprisingly, there is a module in Qermit called zero noise extrapolation, from which we can import a MitEx-generating function, following the same kind of blueprint scheme I talked about before, which will generate a dataflow graph that automatically runs all of this, and which also has a couple of extra keyword arguments.
These arguments let us control the parameterisation of how we're going to use the error mitigation method; let's talk about that now. So we have our generating function call. We pass it our backend of choice, the noisy backend we're using, since that's what we're trying to improve results on. We pass in our noise scaling list: these are the factors by which we want to artificially inflate our noise, so we're saying, okay, when we run our experiment normally it's noise level one, let's get points for noise levels three, five, seven and nine and then try to do our extrapolation back. Then the folding type, and we'll do this one first, is essentially the manner in which we choose to artificially inflate this noise. With circuit folding, essentially what happens is this: our whole circuit is a unitary process, so for noise level three, if we run that unitary once, that's noise level one; if we then run the dagger of that unitary, that's noise level two, but the actual process we've done so far is just the identity, which on its own isn't helpful; and if we then run our whole unitary again, we've run three times as many gates for our whole circuit while applying the exact same overall unitary to our state of qubits. So we can say, oh, that's a bit like having three times as much noise when running my process, and that can be my data point. And for five, seven and nine we just run the same circuit plus its dagger more and more times. And then the fit is the way we fit these data points back to the point where there's zero noise, which we'll see in a second. Generating the MitEx object itself is very straightforward, and now we run it with the same experiment we defined before; defining that seemed maybe a little bit complex, but once we have it we can just pass it through all these different MitEx objects and they will just do it for us. And we can see that for noise level one here we've got our expectation value, and it's a value around 0.27, so approximately what we saw a second ago; and then as we increase the noise we can see the expectation value getting further and further away from the 0.55 value we wanted. But when we do this fit backwards, we're predicting that in the zero-noise case we'd have got a value of 0.365. Now, is that accurate? No, given 0.55 was our target value for this contrived example I'm showing everybody here, but it is closer, and in a second we're going to see a trick that gets us just over the line to essentially the right value. Before we do that, though, we can have a very quick look at the task graph for this one; I hope you're not too bored of these green graphs yet. I guess what I'm really trying to show is that the error mitigation methods I've shown you are getting more and more complex to implement yourself, so it can be handy to have something that will automatically generate them for you. This one is, I think, actually quite easy to understand, though. Essentially, the circuits come in as normal, we compile them for our backend, and then we have this duplicate box. What this does is define new experiments for each of the artificial noise levels we're working with. So we had artificial noise level one here, three here, five here, seven here, nine here; each of these is run through its own separate experiment, which is what each of these outgoing edges is showing.
And then at the end we have a task that collects all these different experiments, works out the results, and passes things out at the end. Again, the point is that it's doing a lot of stuff in a dataflow graph under the hood, but for us, the user, we just created the object, ran it, and got some results back. This is just very quickly showing that you can use different types of folding and fits for your results, so depending on what experiment you're doing, you might find that a different fit type or a different folding type outperforms on the hardware you're working on. Okay. So we've covered what Qermit is, we've looked at running experiments where I've just got a circuit and I want some shots back, and ways to error-mitigate that, and we've looked at experiments where I want to get an expectation value, and ways of mitigating that with Qermit. What we're going to do to finish off is put these two ingredients together, run them together, and hopefully get a nice expectation value for this noisy case we just looked at. And this is going to be a two-minute job, because it's really straightforward in Qermit, and it was one of the design principles when we initially wrote the code. So, okay: we just looked at this ZNE generator, and we were talking through the arguments. We pass it our noisy backend object, the one we were just talking about, the depolarising noise model with the Qiskit simulator. We've got our noise scaling list, and we're saying artificially increase the noise to levels 3, 5, 7 and 9, and then we're saying use an exponential fit to estimate what the zero-noise case would be. We've also got this other keyword argument that's going to show us the graph representation of that exponential fit. But we're finally going to use a new keyword argument to improve the computation we're doing, and that is experiment_mitres. What this does is define the MitRes object that each of our instances of zero noise extrapolation is going to run through. Or, to make it a bit clearer: this is the ZNE graph we just looked at, and we said that it splits into the five different experiments, one for each noise level, and runs them independently. Well, we can see over here that each one of them has to go through some kind of MitRes object when it does this; at some point we've got to run our circuits on hardware and get some results back. So when we pass in this SPAM MitRes object here, which is the one we defined earlier to get better results when we were working with MitRes, essentially what happens is that this MitRes object here gets replaced with the SPAM-correcting one. I probably have a graph for it somewhere; here, I do. You'll see that where it said MitRes a second ago on the graph around here, now it says SPAM things; you can see the word SPAM appearing. The point being that, for you the user, getting this new dataflow graph that does both zero noise extrapolation and SPAM mitigation essentially just requires adding a new keyword argument to the definition. And so now this is going to do both for us, and it defines the dataflow graph for doing that; we'll just have a quick look at that and then skip past it. And so now, if we run our experiment on this noisy simulator: when we first ran it, in the ideal case, we got this expectation value of 0.55, and that was our target value; when we ran it in the noisy case without any error mitigation, we got 0.25, so we're really far away.
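Pulling the pieces together, here is a sketch of the combined construction just described. The generator gen_ZNE_MitEx and the keyword names (noise_scaling_list, folding_type, fit_type, experiment_mitres) are my reading of the Qermit zero_noise_extrapolation module, so check the exact signature in the documentation before running it:

```python
from qermit.zero_noise_extrapolation import Fit, Folding, gen_ZNE_MitEx

# Zero noise extrapolation: run the experiment at artificially inflated
# noise levels 3, 5, 7 and 9 via circuit folding, fit an exponential
# through the expectation values, and extrapolate back to zero noise.
# The experiment_mitres keyword swaps the plain MitRes inside the graph
# for the SPAM-correcting one, so every folded circuit is also run
# through SPAM mitigation before its counts are processed.
zne_spam_mitex = gen_ZNE_MitEx(
    noisy_backend,
    noise_scaling_list=[3, 5, 7, 9],
    folding_type=Folding.circuit,
    fit_type=Fit.exponential,
    experiment_mitres=spam_mitres,
)

mitigated_results = zne_spam_mitex.run([experiment])
print(mitigated_results[0])
```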
Well, actually now, by applying SPAM mitigation to all the individual circuits being run, and then applying zero noise extrapolation to the expectation values retrieved from all of them, we find that we're able to get a value of 0.5690, blah blah blah, i.e. a value that is a lot closer to the 0.55-ish that we were looking for. You know, arguably it's overshot slightly, and with error mitigation you often end up suffering in this way, where things are really close but not quite there; but in the grand scheme of things it's arguably done a good job of accounting for the depolarising noise model we just looked at. And I think I'm going to have to leave it there for today, because we're coming up to the top of the hour and this would certainly take longer than ten minutes. But what I will say is that we've looked at how to use MitRes when my circuit is fine and I just want to run it and get shots back, we've looked at how to use an error-mitigated MitRes, we've looked at MitEx, we've looked at error-mitigated MitEx, and we've had one look at how to combine the schemes. I should maybe mention, then, that the documentation tries to make it quite clear how you can pass these kinds of keyword arguments between all of the different error mitigation schemes we define, so you can combine them in any way you want, really; some ways are more sensible than others, so be careful. Sorry, I also have the conclusions here: so, yeah, an introduction to Qermit, out-of-the-box error mitigation, and the advanced use of Qermit to combine schemes to get better results. And if you are interested, the notebook in this repository here talks a little bit about how you can define your own schemes, and shows some results for a method for spectral filtering, so it goes through how to make this kind of dataflow graph yourself. Over these last sessions, Callum and Catherine have talked about using pytket, Dan has talked to you about noise and error mitigation, and I've talked a bit about Qermit, and I think that's all I'm going to talk about now, so I'm happy to take any questions people might have. Thanks a lot for listening. Okay, thank you for a very nice talk, Silas. Questions? We've got some hands up, at least three, so let's see. Hi, thanks for the nice talk. So, just at the beginning, you showed that when you initialised the MitRes object, so calling the MitRes class, you passed the Aer backend. I was just wondering about the compatibility of different backends, like what you can use, and if there are new hardware things that come out, how you make everything compatible. Great question; I also say great question partly because it falls into my hands nicely, so I really appreciate that. So in pytket, and again hopefully this was discussed yesterday, the backends we define are essentially standardised, so they have the same set of functions to do things with them: if you want to run a circuit, it's always process_circuit; if you want to get results, it's always get_results; if you want to compile, it's always get_compiled_circuit; et cetera. And so this means, when we're defining a MitRes object like so, well, if I wanted to actually edit it... can I edit it here? Live demo, should we find out? No, I can't. But if I wanted to, here I could, instead of importing this backend here from Qiskit... an example is, if I wanted to run this on actual IBM hardware, there is another pytket backend object called an IBMQBackend.
I could create an IBMQBackend object instead, to replace this one, and then simply pass that object through to the MitRes generator, and now, if I ran it, it would run all my experiments through the hardware and not through my simulator. The point being that changing the hardware you want to run on is as simple as changing the backend object you pass. Now, in terms of what ramifications that has for the actual error mitigation method, that is quite a sensitive topic, in the sense that each individual piece of hardware tends to have a very specific noise model that is also very, very hard to characterise. So when people work in error mitigation, they tend to do things like say: all my bits of physical hardware, moving my qubits around and doing measurements, will produce noise coming from all kinds of sources that perturb my computation, but approximately I can probably say that the noise is going to be about a depolarising noise channel, i.e. all my errors will kind of manifest as if additional Pauli gates were added to my computation under some kind of distribution; and they use that principle to define the schemes. Okay, cool, thanks. And maybe on that second point: are there a lot of automated methods to discover the noise model? Okay, well, okay, so I could maybe use that to explain why we decided to make this composable in the first place, and that's because of a method I haven't talked about here called randomised compilation. Randomised compilation is a method which attempts to tailor the form of the noise the circuit experiences, as seen when you get results back. I.e., in terms of errors, and I'm sure Dan introduced this, we often characterise errors as being one of two types: either coherent errors or incoherent errors. Coherent errors are errors where it's the exact same error every time. So if I say to my quantum computer, I want to do a rotation, say of angle 0.3, a coherent error might be that every time it actually runs 0.32 instead, and that's just a calibration fault; but it means that if I run a lot of Rz rotations on the same qubit, by the time I've done a few in a row, suddenly the quantum state I actually receive is quite far away from the one I actually wanted. On the other hand, we talk about incoherent errors, which are probabilistic errors. So it might be that, with some probability, when I run an Rz gate I actually run an Rz gate of 0.32 followed by a Y gate, a Pauli Y gate; in practice, one time I might do a Pauli Y gate, one time I might not, et cetera. And so by the time I've run this gate often, even though there's still some kind of error occurring, that error is, in terms of distance in my Hilbert space, closer to the quantum state I actually wanted in the first place. And so that is a kind of tool people are aware of that attempts to account for the noise, not in the way you say, with a well-calibrated noise model for the actual device that we can then account for; it's actually changing the form of the noise that the device has so that we can then correct for it. So when we have these composable schemes, the idea would be: well, in zero noise extrapolation I need to pick the fit and my noise scaling levels, so how do I know which fit to pick? Well, maybe if I add randomised compilation underneath, then I know that I've got an approximately stochastic noise model over the gates I'm running, so maybe that can inform the kind of fit I'm picking.
So at the level Qermit works at, there's less focus on defining really good noise models and more of a focus on defining schemes that can change the form of the noise and then correct for it. But yes, when it gets to the hardware level, the engineers working on actually making quantum computers are very interested in optimised noise models for their device, just not at the level we care about here. Cool, thank you very much. Thank you. So maybe one more question. Hi, thank you for the nice talk. How do you select the number of shots that is optimal to use? Great question, thank you. That is, in general, quite a hard thing to quantify, and it's something we didn't necessarily consider in depth: if you go and look at the Qermit paper and you look at the results, you'll see that we fixed the number of shots we used across the experiments, because we knew it was of practical importance for the kind of near-term experiments you might be running now. As maybe just an approximate rule of thumb, which I appreciate isn't a very helpful answer: the more qubits your circuit has, the more basis states could be returned by the measurement, and obviously that's an exponential scaling, so if you're running a larger circuit with more qubits, you're going to need to run more shots of that circuit to get accurate results, and going up to the order of 100,000 shots, as we've shown here, may be needed for a quantum circuit with many, many qubits. And similarly, for something like Clifford data regression, the number of circuits you need to run, and the precision with which you need to run them, to get accurate data for actually doing the correction, might be very high as well, so you might also be going up to an order of 100,000 shots. I appreciate that isn't a very helpful answer, but the truth is that the precise relationship between the number of shots you need and the results you get is something we haven't explored in detail yet. Thank you. But if we are going to increase the number of qubits, anyhow we are going to increase the computational cost of the problem. Yeah, well, yeah, exactly: as we move forward in the world of quantum computing and the devices get larger and larger, we would expect to have to run more shots, and that is a problem we need to consider. Okay, thank you. So I'm sure Sahil is going to be online on Slack to answer any more questions you may have. I can be online on Slack, yes. On Slack, send more questions to him if you have some. I think there was one more question, but due to time we should probably now thank our speaker. Okay, thank you. And now we have lunch, so that's good, so we can get our energy sources filled back up, and then