Hi, I'm Gemma from Durham University, and I'm going to tell you about a method we've developed that uses copies to improve precision for continuous-time quantum computing. We've also recently put a paper up on the arXiv, so if you want to read more about it, that's the link up there.

Before I tell you how we improve the precision, I need to tell you why we need to improve precision. What it essentially boils down to is a problem of expectation versus reality. A theorist comes along with the expectation of solving an optimisation problem: a problem whose solution is encoded in the minimum-energy state, the ground state, of that problem. These come in various shapes and sizes, but what we like to do is represent them as Ising models. What you see here is a very simplistic Ising model with two qubits. Each qubit has a field applied to it, and they're connected by this coupling strength here. To access the energy levels of this Ising model, we can write down the Hamiltonian, which looks something like this. As our optimisation problem becomes more and more complicated, we might find that the fields and couplings we want on our Ising model become more and more specific. Unfortunately, this is where reality may have to step in, because when we try to represent our optimisation problem on a quantum computer, we're limited by the resolution of the fields and couplings we can actually set on that quantum computer. The upshot is that if we're trying to represent a field like this, what we may actually end up setting is only something close to it.
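To make the Hamiltonian concrete, here is a minimal sketch of the classical two-qubit Ising energy, E(s) = h1·s1 + h2·s2 + J12·s1·s2 with spins s_i = ±1 (the diagonal case of the σ^z Hamiltonian mentioned in the talk). The field and coupling values are made up for illustration; the talk leaves them generic:

```python
import itertools

# Hypothetical two-qubit Ising instance: local fields h_i and one coupling J_ij.
# These numbers are illustrative only, not taken from the talk.
h = [0.3, -0.7]    # field on each qubit
J = {(0, 1): 0.5}  # coupling strength between qubits 0 and 1

def ising_energy(spins, h, J):
    """Energy E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j, spins s_i = +/-1."""
    E = sum(hi * si for hi, si in zip(h, spins))
    E += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return E

# Enumerate all 2^n spin configurations to find the ground state.
states = list(itertools.product([-1, +1], repeat=len(h)))
ground = min(states, key=lambda s: ising_energy(s, h, J))
print(ground, ising_energy(ground, h, J))  # the minimum-energy configuration
```

For this toy instance the ground state is the frustrated-field configuration (-1, +1) with energy -1.5; the point of the talk is that rounding h and J to a coarse grid can change which configuration wins.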
This is not ideal at all, because every time you change the fields and couplings, you shift the energy levels around. If this happens enough, you may change the actual ground state of your problem, so when you do something like quantum annealing to find the ground state, you end up with a completely different answer to your optimisation problem than the one you want. So how do we go about avoiding the wrong answer? A naive approach would be: if you get the wrong answer the first time, simply try again, and you might get the right answer the second time. But we want to do better than that. Following on from an idea we first found in this paper here, which was also presented at AQC 2019, we start by connecting two copies of our Ising model together. This is where our technique starts to diverge: instead of connecting the two copies ferromagnetically, we connect them antiferromagnetically, so that the spins on either side of each antiferromagnetic link want to oppose each other. We also set the strength of the antiferromagnetic links to the minimum value allowed by the resolution of the quantum computer. We still treat the two individual copies as separate copies of the problem: when finding the ground state, we only need one or more of the copies to be correct for us to consider the computation a success. On top of this, we add a third copy and connect it in this triangular configuration; the idea is that this helps prevent error propagation between the copies. So how did we test how well our technique works? First off, we swapped out the simplistic two-qubit Ising model for more-difficult-to-solve Sherrington-Kirkpatrick spin glasses, which were also studied by other members of our group with continuous-time techniques in this paper here.
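The triangular three-copy construction described above might be sketched as follows. This is my reading of the talk, under two stated assumptions: the sign convention is that a positive coupling penalises aligned spins (i.e. J > 0 is antiferromagnetic), and `j_min` stands in for the smallest coupling magnitude the hardware resolution allows:

```python
# Sketch: connect three copies of an n-qubit Ising instance (h, J) in a
# triangle with antiferromagnetic links of minimum strength j_min.
# Qubit q of copy c is indexed as c*n + q. Names and conventions are
# illustrative assumptions, not the paper's notation.
def three_copy_instance(h, J, j_min):
    n = len(h)
    H = list(h) * 3                 # each copy keeps its own local fields
    K = {}
    for c in range(3):              # intra-copy couplings, one set per copy
        for (i, j), Jij in J.items():
            K[(c * n + i, c * n + j)] = Jij
    for c in range(3):              # inter-copy links: copy c -> copy c+1 (mod 3),
        d = (c + 1) % 3             # closing the triangle
        for q in range(n):
            # +j_min penalises agreement, i.e. an antiferromagnetic link
            K[(c * n + q, d * n + q)] = +j_min
    return H, K
```

For a two-qubit instance this produces six fields and nine couplings: three intra-copy couplings plus six antiferromagnetic links (two qubits times three triangle edges).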
We went from five-qubit up to nine-qubit spin glasses. I haven't drawn in the h's and J's here, but we still have a random field on each of the qubits, and these black lines represent the couplings. The fields and couplings are drawn randomly from the interval minus one to plus one. Next, we generate 10,000 of these Sherrington-Kirkpatrick spin glasses and subject them to a lack of resolution: we define a number of allowed values between minus one and plus one that the fields and couplings have to round to, and that number of values is 2^(p+1), where p is what we call the precision. We then vary this precision from one to ten; the idea is that as the precision increases, you get closer to the optimisation problem you actually want to represent. Once we've applied the lack of resolution, we measure something we call the fraction correct for the singular instances: the fraction of times that, after being subjected to the lack of resolution, we still find the correct ground state. Just to note that in all cases we found the ground state using a classical branch-and-bound technique. Once we'd done this for the singular instances, we did it again for our three antiferromagnetically connected copies, finding the fraction correct, remembering that we only need one or more of the copies to be correct for the computation to count as a success and add to our fraction correct. We could then also measure the breakdown of the fraction correct, i.e. the fraction of times we had three copies correct, two copies correct, or just one copy correct, and plot this versus the precision. This is what you see here: in the left-hand bars you see the results from our singular, disconnected instances.
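The precision-limiting step might be sketched like this, assuming the 2^(p+1) allowed values are evenly spaced across [-1, +1] with both endpoints included; the talk does not spell out the spacing, so that detail is an assumption:

```python
import numpy as np

# Sketch: round each field/coupling to the nearest of 2**(p+1) allowed
# values spanning [-1, +1]. Even spacing with included endpoints is my
# assumption about the grid; the talk only gives the count 2**(p+1).
def quantize(x, p):
    levels = np.linspace(-1.0, 1.0, 2 ** (p + 1))
    x = np.asarray(x, dtype=float)
    idx = np.abs(x[..., None] - levels).argmin(axis=-1)
    return levels[idx]

# At p = 1 there are only four allowed values: -1, -1/3, +1/3, +1.
print(quantize([0.30, -0.70, 0.99], p=1))
```

At low precision the rounding is severe (here -0.70 snaps all the way to -1), which is exactly the regime where the ground state of the rounded instance can differ from that of the intended one.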
You can see that as the precision increases, the fraction correct also increases. This is as we would expect, because as the precision increases, we get closer to the optimisation problem we want to represent. In the right-hand bars you see the results from our three antiferromagnetically connected copies, where again the fraction correct improves as the precision increases. What's interesting to us is that between the singular copies and the three connected copies we see an improvement in the fraction correct, indicating that our technique is working. We can also see that a lot of the improvement from our technique comes from those instances with either two or only one copy correct, indicating that the antiferromagnetic links have a part to play in it. We next wanted to look at the effect of our technique on individual instances: the fraction of times our technique had no effect on the singular instance; the fraction of times it had a positive effect, i.e. the singular instance was incorrect but was made correct by our technique; and finally the fraction of times it had a negative effect, i.e. the singular instance was correct but was made incorrect by our technique. What we found was this. The left-hand bars again show the fraction correct for the singular instances, as on the previous plot; in the right-hand bars we've essentially changed how the bars are broken up. What we see is that the improvement from our technique, as you might expect, comes from the instances where it has a positive effect. Happily for us, the fraction of instances where our technique has a negative effect is fairly small. It was also pointed out to us that, alongside our three connected copies, we could run an extra external single copy; that would let us see when our technique is breaking a singular instance, and so recover even that small loss and access the full improvement.

We can therefore measure the gap between the left-hand bars and the right-hand bars to show the improvement between using a singular copy and three connected copies. This was initially just for five-qubit spin glasses, but we wanted to see whether the improvement continued for larger spin glasses, and that's what I'm showing here: results for five qubits again, seven qubits, eight qubits and nine qubits. The gap between the solid line, which shows the fraction correct for the singular instances, and the dotted line, which shows the results for the three connected copies, indicates that there continues to be an improvement from using our method up to at least nine-qubit spin glasses. We next wanted to quantify this improvement in terms of a gain of precision. You can think of this as measuring horizontally across from your three-connected-copy results to your single-copy results, to ask what precision a single copy would need in order to get the same results as the three copies. To do this numerically, we plotted one minus the fraction correct, essentially flipping the axis round, and fitted an exponential to our single-instance data. We could then measure across the distance between the three-copy data and the single-copy data, and plot this versus the precision to get a graph that looks like this. What we see, at least from p equals two and higher, is that as the precision increases, the precision improvement also increases, up to a gain of around three bits of precision at around p equals six or seven. This indicates that there is an improvement in precision to be gained by using our technique.
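The "gain in precision" estimate described above can be sketched numerically: fit an exponential to 1 − (fraction correct) for the single-copy data, then invert the fit to find the single-copy precision p* that would match each three-copy result, so the gain is p* − p. The data arrays below are made-up placeholders, not the paper's results:

```python
import numpy as np

# Placeholder fraction-correct curves versus precision p (illustrative only).
p = np.arange(1, 11)
f_single = np.array([0.40, 0.55, 0.68, 0.78, 0.85, 0.90, 0.93, 0.95, 0.97, 0.98])
f_three  = np.array([0.55, 0.72, 0.84, 0.91, 0.95, 0.97, 0.98, 0.99, 0.995, 0.997])

# Fit the single-copy error as 1 - f_single(p) ~ A * exp(k * p):
# a linear fit of log(1 - f_single) against p gives slope k and intercept log A.
k, logA = np.polyfit(p, np.log(1.0 - f_single), 1)

# Invert the fit: p*(f) = (log(1 - f) - log A) / k, then gain = p* - p,
# i.e. the horizontal distance between the two curves.
p_star = (np.log(1.0 - f_three) - logA) / k
gain = p_star - p
print(gain)
```

With any three-copy curve that sits above the single-copy curve, the inverted fit returns a positive gain at every p, which is the quantity plotted against precision in the talk.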
In conclusion, I hope I've convinced you that by connecting three copies of spin glasses with antiferromagnetic links of the minimum strength, we have made our problem more robust to a lack of precision, and that we can use this to increase the effective precision of our computations. If you would like more details, we have a paper recently up on the arXiv, and I'm also happy to answer any questions. Thank you for listening.

Thank you, that was an interesting talk. I'm wondering about embedding three copies of large cliques in such a way that they can all be connected. On the Pegasus graph there are these parallel, paired qubits, and I can see how you can get two embeddings in there pretty easily, but I'm wondering about three embeddings: how difficult is that, does it mess up your chain lengths, and questions like that?

At least at the time we started doing this, we didn't think connections with triangles would be possible on something like a D-Wave machine, but since the Pegasus graph now has some triangles, it may be possible. There's also another issue with embedding: we think that embedding is likely to have a negative effect on the fraction correct even without our method, just for the singular instances, so it could potentially not be helpful. Yeah, that's an interesting question.

I did have another question. Do you have an explanation for why antiferromagnetic couplings might be better than ferromagnetic couplings in this scenario?

Our initial idea was that it helps to prevent error propagation. We're not sure, because the way we now set the lack of resolution is deterministic rather than random... I've lost my thread a little bit there.

We could follow up afterwards.

Yeah, okay, sorry about that.
No worries.

I'm a little bit confused about the comparison between embedding three copies and having just one copy, because if you're embedding three copies you're definitely spending much more resource than you would need for one copy, probably even more than a factor of three because of the embedding overhead.

Yes, but these bars are essentially equivalent to running one copy three times. Because of the way we've set up our problem now, the errors are deterministic rather than random, so there's no improvement from repeating; you essentially can't see the difference it makes. Before, we had a lot of degeneracy in the way we did it, and you could see the improvement that doing extra runs gave, but you can't see it now, unfortunately. We get more improvement by using this technique, and what I'm trying to say is that this is effectively the same, minus the possible extra resources from embedding, as running three copies of the single instance.

Can you show again how you did the simulation?

This was how we did it: we defined a number of allowed values between minus one and plus one, and the randomly generated fields and couplings, drawn within those bounds, had to round to the allowed values.

I'm basically asking whether you are computing the ground state or doing something else.

We're finding the ground state. No dynamics, yes.

In the work we did on quantum annealing correction (QAC) with ferromagnetic, as opposed to antiferromagnetic, coupling, the logic of using ferromagnetic links is that you force all the qubits to agree, so you don't create any frustration. In this setting you're actually introducing a lot of extra frustration that wasn't there in the original problem, which seems like it could be problematic, so I guess I'm repeating Kathy's question: why do you want to do that, as opposed to ferromagnetic?
As well as using the antiferromagnetic links, in QAC you were enforcing that all of the copies within the system, and you used more than three, needed to be correct, so it was more of an error-correction code, whereas here we're allowing copies to be incorrect. Essentially, by allowing one copy to be incorrect, we're almost helping the other copies remain correct.

That's a very interesting idea, but is that intuition, or is it proven?

There was similar work using temperature to perform error correction against small errors, so this is a similar idea: you're asking what similar optimal or near-optimal solutions are nearby. The intuition I would have is that this is similar to turning on a low finite temperature, but you're doing it by adding couplings rather than explicitly adding a temperature.

I also had some intuition about why antiferromagnetic might be better than ferromagnetic. Say you just have this two-qubit problem, and we consider one control error: the problem is such that the lower qubit, the qubit on the left, is supposed to point downwards, but a control error makes the local field on that qubit too high, so it points upwards. Now, with ferromagnetic connections to the other copies, this error will propagate and the copies will reinforce each other; but with antiferromagnetic connections, the frustration stops the error from propagating and reinforcing itself.

I understand, yeah. Okay, I guess there is something to discuss afterwards. So let's thank Gemma again, and this is the end of today's...