Hello, everyone. Welcome to laying the groundwork for quantum-powered use cases. I'm Sarah Sheldon, senior manager of quantum theory and capabilities at IBM Quantum. Today, I want to show you some of the remarkable work that researchers at IBM and many other organizations are doing with utility-scale hardware. As you heard this morning, we have now entered a new era of quantum computing, where noisy quantum devices are able to produce accurate expectation values in regimes that are beyond brute-force classical computation. We believe this is a new era of exploration, and we can start to think about quantum computers as a valuable tool for doing scientific research. But this raises an important question, namely: what can I actually do with a utility-scale quantum computer? We have always said that there are two steps to achieving quantum advantage. First, we have to get to larger circuit sizes, and then we have to map those circuits onto relevant use cases. But making that leap from circuits to use cases remains a challenging problem. So today, we're going to talk about some of the work we've been doing alongside our clients and partners to create the building blocks that will help us bridge that gap. Part of this is breaking down problems into the Qiskit patterns, our four-step process for running problems on a quantum computer. You've already heard from Blake today about how we're doing this by improving our performance in step two, where we optimize quantum inputs, and in step three, where we execute our circuits using Qiskit Runtime primitives. In this session, we're going to look at 13 demonstrations, an actual baker's dozen of demonstrations, that show how IBM and other research organizations are optimizing components in step two through different kinds of error mitigation techniques or circuit optimizations. We'll also see examples of how we're adding in additional resources for step four, where we post-process results classically. So what I would like to do today is invite these speakers onto the stage who are going to talk about what comes next. These speakers will focus on three key research threads, all of which are helping to expand what we can do with utility-scale quantum computers today. First, we'll hear about how we're extending circuit volumes and going to different circuit structures. Then, we'll hear about new capabilities we're developing that incorporate dynamic circuits. And finally, we'll hear about how users of our systems are optimizing circuits for use cases that have practical relevance to scientific problems. But before we dive into these three research threads, I'd first like you to hear a bit more about the results of our original quantum utility paper and the follow-up work that we've been doing since that research was first published in Nature in June of this year. So please join me in welcoming to the stage UC Berkeley PhD student Sajant Anand, who will give his perspective as a tensor network simulation expert and original author on our quantum utility paper. Hello, my name is Sajant Anand, and I was one of the Berkeley researchers who worked on the classical simulations for the quantum utility paper IBM published in collaboration with UC Berkeley earlier this year. Today, I'd like to take a few minutes to summarize these results and their subsequent progress over the past six months, while also providing my perspective as someone who works on classical simulations of quantum computing.
To begin, let's briefly recap the general framework of quantum dynamics on digital quantum circuits. As shown in the schematic on the left, one begins with an easy-to-prepare initial state, performs some sequence of one- and two-qubit gates, and then measures each qubit in some desired basis. The experimental and theoretical difficulty of such a circuit is determined by the number of qubits, the number and types of the gates, and the order in which these gates are applied. In the circuits we will consider, the single-qubit gate is parameterized by a single-qubit angle theta, which controls the difficulty of the simulation. In the utility paper, we demonstrated that for a 127-qubit, depth-15 circuit, using error mitigation techniques developed both by IBM and by the quantum computing community more broadly, the quantum computer produces results that agree within error bars with exact results for a high-weight, non-local observable. As seen in the figure to the right, error mitigation is crucial to get accurate and useful results at these depths and qubit numbers. We also found that one of the standard approximation methods for simulating quantum dynamics fails to reproduce the exact results, for reasons that are well understood. However, as was quickly pointed out by the community, other approximate approaches can and do reproduce these exact results to machine precision. Now, moving beyond exactly verifiable circuits, we measured a local observable as a function of the single-qubit angle at depth 60, four times deeper than the circuit we discussed on the previous slide. For the two single-qubit angles where the exact result is known, that is, a theta of zero and pi over two, the quantum computer agrees with the exact results. However, when we move to some generic single-qubit angle between zero and pi over two, no exact result is available at this depth, and we find that the quantum computer generally agrees with a plethora of approximate yet reasonably well-converged numerical approaches proposed by the quantum community shortly after the paper appeared. This broad agreement between classical and quantum gives us confidence in the accuracy of the quantum computer and the error mitigation techniques utilized, while also giving credence to the approximations employed by the numerical algorithms. Ideally, moving forward, we will see the classical and quantum approaches working together symbiotically to benchmark and improve one another. Now, having demonstrated that error mitigation enables accurate quantum computation at large qubit numbers and depths, we want to demonstrate that sizable improvements in quantum computing capabilities can be achieved with the hardware improvements discussed earlier today. On 111 qubits of devices from different generations, using a non-standard qubit connectivity due to a few defective qubits, we can measure a Loschmidt-echo-like quantity, which is thought to be a stringent test of quantum hardware. Essentially, as shown again in the schematic on the left, we start with the all-zero state, we apply some unitary and then its time-reversed copy, essentially undoing all of the computation we just did. Ideally, this should just be an identity operation, so if we measure our state, we should get back the original input state. In this case, we're going to measure the Z magnetization, and we expect a true answer of one.
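To make the circuit family and the echo concrete, here is a minimal Qiskit sketch, assuming a small linear chain rather than the 127-qubit heavy-hex device used in the experiments; the layer structure (RX kicks plus a brickwork of ZZ couplings) follows the description above, and the qubit count and depth are illustrative.

```python
# Minimal sketch of the kicked transverse-field Ising circuits discussed
# above, on a small linear chain; theta tunes the simulation difficulty.
import numpy as np
from qiskit import QuantumCircuit

def trotter_layer(n_qubits: int, theta: float) -> QuantumCircuit:
    """One Trotter step: RX(theta) kicks plus brickwork ZZ couplings."""
    qc = QuantumCircuit(n_qubits)
    for q in range(n_qubits):
        qc.rx(theta, q)
    for start in (0, 1):  # even bonds, then odd bonds
        for q in range(start, n_qubits - 1, 2):
            qc.rzz(-np.pi / 2, q, q + 1)
    return qc

def loschmidt_echo(n_qubits: int, theta: float, depth: int) -> QuantumCircuit:
    """Apply U = (trotter layer)^depth, then U dagger; ideally the identity."""
    u = QuantumCircuit(n_qubits)
    for _ in range(depth):
        u.compose(trotter_layer(n_qubits, theta), inplace=True)
    echo = u.compose(u.inverse())
    echo.measure_all()  # Z magnetization of |0...0> should ideally return 1
    return echo

circuit = loschmidt_echo(n_qubits=8, theta=0.7, depth=5)
```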
One finds that the new device discussed earlier today, Heron, can much more accurately capture the theoretical value of one than the previous device, Eagle, which was used in the utility paper. So as one tunes the single-qubit angle theta from zero to pi over two, one finds that the Heron device performs much better, and we can also use approximate numerical techniques to estimate the extent of the device involved in the quantum computation. And we find that significant regions of the device are involved in the computation, meaning that we're doing some significant quantum task. This indicates that modest quantum hardware improvements allow for accurate computations at greater depths and circuit complexities than previously considered. To conclude, I want to raise an important and open question that is actively being investigated by the broader quantum community. One overarching goal for noisy, near-term quantum computers is to identify quantum circuits and observables that are both experimentally feasible while also being classically challenging, even before the realization of fault-tolerant error correction. Considering the utility circuits that we ran earlier this summer, we find that the original circuits have relatively slow entanglement growth, as measured by numerical simulations of the out-of-time-ordered correlator shown in the leftmost figure. However, with minimal modifications of the circuit, in which the number of two-qubit gates is held constant while adding some additional single-qubit gates to remove circuit structure, one finds that this OTOC grows significantly faster. And as many classical approximation techniques become less effective with increased entanglement, such circuits may be worth investigating experimentally. However, I do want to caution that for generic quantum dynamics, local observables will have a vanishing experimental signal with increasing depth, as shown in the rightmost figure, meaning that blindly increasing circuit depth is not necessarily a viable approach. Now this raises the question: where should we look for interesting circuits beyond generic thermalizing dynamics? Well, in the remainder of this session, you'll hear about several different examples of interesting areas to explore, from adaptive quantum circuits to long-range entanglement generation to fundamental physics dynamics. Thank you. And now, please join me in welcoming IBM's Sarah Sheldon back to the stage. Thanks, Sajant. So as you can see, this paper was just the first evidence of quantum utility. And we've already begun to reach into the space beyond that original paper. So moving forward, what comes next? Well, it's going to look a lot like these three research threads that I mentioned earlier: extending circuit volumes, incorporating dynamic circuits, and optimizing circuits for interesting use cases. So with that in mind, let's take a closer look at these three research threads, starting with the first. To do that, I am pleased to introduce a group of speakers who will talk about how they're extending circuit volumes and improving results with alternative error mitigation and error suppression techniques, as well as with the aid of additional classical resources. These methods all tie directly into step two of the Qiskit patterns, optimizing quantum inputs. And this plays an essential role in getting to larger problem sizes and circuits that are harder to simulate classically.
Now to start us off with a look at a new performance management solution for Qiskit Runtime users, please join me in welcoming founder and CEO of Q-CTRL, Michael Biercuk. Good morning. It is a real pleasure to be here. My name is Mike Biercuk. I'm the CEO and founder of Q-CTRL. And it is a tremendous privilege and honor to talk to you today about something really tremendous in our sector, a true milestone for quantum computing: introducing the world's first natively embedded performance management software for quantum computing. This is the outcome of a relationship we've had with IBM for more than five and a half years, since we joined the inaugural class of the IBM Quantum Startup Network. What we have built is a completely autonomous, AI-driven error suppression workflow that goes from the lowest levels of the stack, AI-based replacement of all of the native hardware machine language, all the way through to AI-driven error-robust compilation. All of these techniques are built to be interoperable. All of them are completely automated. And now they are accessible within Qiskit with one single command that you can see there. This is the first-of-its-kind independent software vendor integration into IBM Quantum. And it's more important because of what it delivers. First, it delivers orders of magnitude of improvement: more than 10x improvement in the circuit depth that you can execute, and more than 100x in cost savings to users, because these technologies are deterministic. They don't have any overhead. They're different from, but complementary to, error mitigation. And more than 1000x enhancement in performance. And I know these numbers look too good to be true, but they are all in the published, peer-reviewed technical literature. And the best part is all of this is accessible, at no extra cost, on the pay-as-you-go systems with 27 and 127 qubits, available right now. But what I want to tell you about for the rest of this talk is what you can do with access to this new automated error suppression. And so we're very pleased to show you some exciting outcomes, looking at an algorithm called QAOA. And in particular, executing full hybrid implementations of QAOA at the scale of 50 qubits in just 90 seconds. Now I want to emphasize we are in no way cheating. We are not pre-compiling. We're not selecting the best circuit and executing it once. We're doing full hybrid execution with no classical pre-processing in 90 seconds. The problem that we look at is called Max Cut. This is a graph partitioning problem that's very widely used in machine learning classification, VLSI circuit design, and the like. It involves taking what's called a graph and trying to partition it in order to maximize a value called the cut value. Now at utility scale, what's very exciting is that we can go beyond brute force. Brute force gives us a sampling of outcomes. When we add QAOA executed with the Q-CTRL performance management on IBM Quantum, we shift the distribution to the values that matter, the large Max Cut values, by more than two and a half sigma. But at the large values, the ones we care about most, the boost over brute force or the default implementation is more than 32,000 times. We can go further. We can look at problems where there's been published literature on competitive platforms. Here again, we're running Max Cut.
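To pin down the objective being optimized here, a minimal sketch of the Max Cut cut value, with an illustrative toy graph; the brute-force enumeration shown is exactly what stops being feasible at utility scale.

```python
# Toy illustration of the Max Cut objective described above: a bitstring
# assigns each node to one of two partitions, and the cut value counts
# edges crossing the partition. Graph and size here are illustrative.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # small example graph

def cut_value(bits, edges):
    """Number of edges whose endpoints fall in different partitions."""
    return sum(bits[u] != bits[v] for u, v in edges)

# Brute force over all 2^n partitions: feasible only for tiny graphs,
# which is why heuristics like QAOA target utility-scale instances.
best = max(product((0, 1), repeat=4), key=lambda b: cut_value(b, edges))
print(best, cut_value(best, edges))
```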
But the combination of IBM Quantum and the Q-CTRL performance management lets us take these graphs from the published literature, execute in full hybrid mode, and get the correct maximum cut in all of these cases, in circumstances where the competitor was not able to find the Max Cut. But in addition, we can get that right answer with full hybrid execution in 30% less time than it took the competitor to run a single circuit. When you look across the breadth of applications that are enabled, the numbers are truly astounding. What we've built is completely generic and agnostic of the algorithm you want to run. We've shown this 32,000-plus-times boost in QAOA. We've shown that with an algorithm called Bernstein-Vazirani, you can achieve 99.9% confidence that you have the correct answer with less than 5,000 executions, 5,000 shots. And more recently, we've been able to show world records, for instance, in achieving the largest-ever GHZ state. A GHZ state is a form of maximally entangled state. We've seen up to 60 qubits in this particular set of experiments. All of this is unlocking the performance inside the IBM system. So if you'd like to get access, it is available right now at no extra cost, and you can get in touch with us to learn more about this technology. And we're very excited for what it brings to you as users of this platform. Next, to talk about quantum computing's role in the future of high-performance computing, please welcome to the stage computational scientist at Argonne National Laboratory Yuri Alexeev. Thank you, Michael, and hello, everyone. We expect that in the future, high-performance quantum computing and high-performance classical computing will complement each other and be more powerful together than either one alone. Currently, we are in the phase where classical computing can improve quantum computing, and as we enter the utility-scale era, we are going to see how quantum computing will enable classical high-performance computing as well. To achieve these goals, we developed a method called operator backpropagation, where we use classical computing to enhance the capabilities of quantum devices. Our method increases the circuit volumes by allowing us to compute the result of deeper quantum circuits. To demonstrate the capabilities of our method, we used the classical supercomputer Polaris, located at Argonne National Laboratory, and the 127-qubit Eagle quantum device, as shown on these slides. Okay, on this next slide, we demonstrate the capabilities of our method for Hamiltonian simulations using both classical computing and quantum computing. We considered a utility-scale circuit for Hamiltonian simulation. In the plot here, the x-axis is the unmitigated signal and the y-axis is the maximum CNOT depth of the quantum circuits. We are looking at the circuit depth that we can achieve using a quantum device alone, given a certain unmitigated signal. For example, if errors are small enough such that the unmitigated signal is at least 0.8, the maximum depth is six CNOTs; we are looking at the left column here. But if you dedicate a part of the computation to classical computing, you can compute the outcome of a much deeper circuit, given the same unmitigated signal. For example, for an unmitigated signal of 0.8, the maximum depth increases from six CNOTs to 42 CNOTs. This is a seven-times increase in the depth of the quantum circuit. This is our main result.
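As a preview of the splitting that Yuri describes next, here is a minimal NumPy sketch of the core identity behind operator backpropagation: conjugating the observable through the classically handled tail of the circuit. Dense random matrices stand in for the Clifford and tensor network machinery used in the real experiments; all names and sizes are illustrative.

```python
# Toy sketch of operator backpropagation: split U = U2 @ U1, absorb U2
# into the observable classically (O' = U2^dagger O U2), so only the
# shallower U1 would need to run on hardware.
import numpy as np
from scipy.stats import unitary_group

dim = 2**3                      # three qubits, toy scale
U1 = unitary_group.rvs(dim)     # "front" of the circuit: runs on the QPU
U2 = unitary_group.rvs(dim)     # "tail" of the circuit: handled classically
Z = np.diag([1.0, -1.0])
O = np.kron(Z, np.eye(4))       # observable: Z on the first qubit

O_prime = U2.conj().T @ O @ U2  # backpropagated observable

psi0 = np.zeros(dim); psi0[0] = 1.0
full = psi0.conj() @ (U2 @ U1).conj().T @ O @ (U2 @ U1) @ psi0
split = psi0.conj() @ U1.conj().T @ O_prime @ U1 @ psi0
assert np.isclose(full, split)  # measuring O' after U1 == measuring O after U2 U1
```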
It significantly increases the depth of the circuit we can handle. However, it's important to note that this improvement will show diminishing returns as the capabilities of quantum devices increase in the future. And we also note that an unmitigated signal value of 0.4 is actually enough to perform PEA. This slide shows a high-level overview of how we combine quantum computing and classical computing. Given a quantum circuit, we split it into two parts, U1 and U2, as you see on the slide. The idea is that we compute the first part, U1, on a quantum device, compute the second part, U2, classically, and then stitch the results together. So what we do is start with U2 first and only then compute U1. That's why we call it backpropagation: we start not from left to right, but from right to left. Okay. In particular, for the U2 computation, we use our method to figure out a new observable, O prime, obtained from a similarity transformation of the original observable O. Measuring O prime on the quantum device is equivalent to measuring the original observable O. The computation of O prime is generally very computationally expensive, and this is where we leverage the power of classical computing to our advantage. It's important to mention that there are several different methods to perform operator backpropagation. For this project, we used Clifford and tensor network quantum circuit simulators. We have plans to continue refining these techniques for depth increase, using high-performance computing, and to demonstrate the ability to improve quantum hardware experiments. In the future, we expect that classical high-performance computing and quantum resources will be even more tightly coupled. On this slide, we have a diagram showing how we run operator backpropagation experiments; the way this is done is by distributing jobs between a high-performance computing system or cloud service and IBM quantum devices, using Quantum Serverless, which is important for the convenient and seamless use of our method. So this is just a high-level overview, but if you want to learn more, there will be two more presentations coming up at the practitioners forum events in the next two days, which will give you more information about this project. On Tuesday, Minh Tran will dive into the details of the operator backpropagation technique which we used for the hardware experiments I just showed. On Wednesday, Bryce Fuller will show how Quantum Serverless can be used to offload operator backpropagation onto cloud computing resources. This will be part of a large middleware session. Thank you very much for your attention. Okay. And now, I'd like to welcome to the stage Sabrina Maniscalco. Thank you, Yuri, and hello, everyone. Error mitigation is an essential part of near-term quantum computing. At Algorithmiq, we have developed a tensor network error mitigation method, or TEM, using tensor networks to counter noise in post-processing. TEM works efficiently because it's generally easier to simulate classically just the noise affecting a quantum circuit than the circuit itself. In fact, this task becomes increasingly easier as the level of noise in the device decreases. In TEM, we construct a tensor network representation of the inverse noise map. This is the quantum map that, when applied to the state produced by the noisy circuit, gives as an output the ideal noiseless state. Of course, in practice, we cannot work with complete multi-qubit states.
It would be unaffordable. What we use instead are informationally complete, generalized measurements. These form a seamless interface between quantum and classical computers. We can simply feed the outcomes of the noisy devices into the tensor network codes to produce mitigated estimates of observables of interest. This allows us to bypass the insurmountable measurement cost of working with multi-qubit states. And in fact, one of the main advantages of TEM is its measurement shot efficiency. We can prove that with respect to other state-of-the-art methods, TEM has a measurement overhead which is quadratically smaller. It is also very robust, more so than zero-noise extrapolation in the low signal-to-noise-ratio regime. In order to test TEM on hardware, in collaboration with the group of John Goold at Trinity College Dublin, we identified as an ideal test bed the kicked Ising model in a transverse field. This is a model that exhibits interesting features from both a physical and a computational point of view. We studied the system at infinite temperature, and in particular the two-time dynamical correlation function, because this is the key quantity for understanding transport in condensed matter systems. Since it's a discrete dynamics model, its circuit implementation is exact. Moreover, the same circuit structure can be used in different parameter regimes. In one limit, the circuit is so-called dual unitary. It is classically simulable and exactly solvable. And this is ideal for understanding and verifying that the noise-mitigated results are actually correct. However, with different model parameters, and hence different single-qubit gates in the circuit, the model becomes non-integrable and the dynamics difficult to simulate classically, both in the Schrodinger and in the Heisenberg picture. This is the regime where quantum computing can unveil new physics. We have recently tested TEM on hardware in collaboration with the group of Ivano Tavernelli at IBM Zurich. And we have simulated the model at the dual unitary point, where it is simulable and solvable, using 50 active qubits and computing the two-time dynamical correlation function that you see here in the slide. In practice, 18 more qubits were used to reduce cross-talk effects. The maximally mixed state was produced by sampling computational basis states on all qubits but one. We consider four different time steps, time equal to zero, 17, 33, and 49, in the dynamics, with corresponding circuits between zero and 98 layers of CNOTs. The deepest circuit that you see here in the slide contains 2,402 CNOT gates. The expected signal in the ideal case is rather simple. The correlation function should vanish everywhere except on the light cone, that is, on qubit 17 at time step 17, at 33, and so on, where its value should reach one. So in this plot, you should see peaks reaching one for those qubits. However, due to the presence of noise, the signal decays. And in particular, at the last point, point 49, the expectation value nears zero, becoming practically indistinguishable from the background qubits. However, in all cases, TEM manages to recover the correct signal within three sigma. Of course, the error bars increase for increasing circuit depth, which is expected because the signal becomes increasingly noisy, and to mitigate this effect, we increase the number of measurement shots.
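As a toy illustration of the "invert the noise in post-processing" idea, here is a minimal sketch assuming simple single-qubit depolarizing noise, for which the inverse map is diagonal and factorizes as a tensor product; real TEM inverts a learned, more general noise map as a tensor network and consumes informationally complete measurement data, so this is only the skeleton of that idea.

```python
# Toy version of TEM's post-processing: under single-qubit depolarizing
# noise with probability p, the noise map on Pauli expectation values is
# diagonal, and its inverse factorizes as a Kronecker (tensor) product.
import numpy as np

p = 0.05
noise_ptm = np.diag([1.0, 1 - p, 1 - p, 1 - p])  # Pauli transfer matrix (I, X, Y, Z)
inverse_ptm = np.linalg.inv(noise_ptm)

# Two-qubit inverse map = Kronecker product of one-qubit inverses.
inverse_2q = np.kron(inverse_ptm, inverse_ptm)

noisy_zz = 0.86                      # e.g. a measured <ZZ> under noise
ideal_zz = noisy_zz / (1 - p) ** 2   # what the inverse map returns for Z (x) Z
# Consistency check against the full 16x16 map: Z (x) Z sits at index 15.
vec = np.zeros(16); vec[15] = noisy_zz
assert np.isclose((inverse_2q @ vec)[15], ideal_zz)
```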
But all in all, these results demonstrate the potential of TEM to mitigate errors in large-scale experiments with existing IBM quantum devices. With TEM, we expect to seize quantum utility for real use cases in the near future. Thank you for your attention. And now, to continue discussing the role of error mitigation in extending circuit volumes, I'm pleased to introduce the next speaker, co-founder and chief scientific officer of Qedma, Dorit Aharonov. Thank you, Sabrina. Hello. Good morning, everyone. Almost noon. I'm Dorit Aharonov. I'm a professor of computer science at the Hebrew University, and I'm the chief scientist of Qedma Quantum Computing. And I'm really excited to be here today to talk to you about QESEM, our error suppression and mitigation software product. So let me start with a few headlines of what QESEM is all about, and then I'll dive into details of some pretty impressive demonstrations so everyone here can join my excitement. So first, QESEM is software that makes your quantum algorithm run as if the quantum hardware were noise-free. In other words, Qedma's software essentially eliminates the major problem of noise when running quantum algorithms on current devices. Second, QESEM is application agnostic. Namely, it has guaranteed performance for any quantum algorithm, and not just for a specific or custom-made one. Thirdly, our product is hardware agnostic, and it was successfully tested on a variety of hardware platforms. And finally, and very importantly, we ran our software on IBM's Eagle device, Brisbane, and demonstrated the largest unbiased error mitigation to date, with prospects for generic quantum advantage real soon. So how does it all work? Our software takes as input a quantum algorithm and applies our magic sauce of high-accuracy and very fast characterization, calibration, and compilation protocols, producing several quantum circuits; these circuits are then executed, and their outcomes are post-processed to give the final, highly accurate result of the desired quantum algorithm. By the way, the name of our product, QESEM, is not only the acronym of quantum error suppression and error mitigation; it also means magic in Hebrew. Now, it's really important that QESEM produces unbiased results. What this means is that it recovers the ideal behavior of the given algorithm, up to unavoidable statistical errors, as if the algorithm were run on a noise-free device. Okay, at the bottom, you can see two demos on IBM's Falcon processors that demonstrate how fast QESEM is for Hamiltonian simulation and amplitude estimation algorithms. In the amplitude estimation, for example, Qedma's software used only three hours of QPU time, which is more than a thousand times faster than the leading competing unbiased method, PEC, which would have taken four months to complete the same task. This is just the beginning, though. So let's move on to IBM's large devices, namely the 100-plus-qubit Eagle processors. To evaluate performance, we use here a metric which we call active volume. It's a very natural metric, and we think it's particularly adequate for generic error mitigation protocols. The active volume is defined as the number of CNOT gates that significantly affect the measured observable. This is the volume one would need to perform error mitigation on when it is unknown how the noise propagates through the circuit, which is the case for generic algorithms. The active volume is closely related to how complex it is to perform classical simulations.
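One concrete way to read that definition is to count the two-qubit gates inside the backward light cone of the measured observable. A minimal sketch follows, assuming a brickwork layout; the gate layout, data structures, and example numbers are illustrative, not Qedma's actual accounting.

```python
# Sketch of the "active volume" bookkeeping described above: walk backward
# from the measured qubits and count the two-qubit gates that can influence
# the observable (its causal light cone).
def active_volume(layers, measured_qubits):
    """layers: list of layers, each a list of (q1, q2) two-qubit gates,
    ordered first to last; measured_qubits: support of the observable."""
    support = set(measured_qubits)
    count = 0
    for layer in reversed(layers):
        for q1, q2 in layer:
            if q1 in support or q2 in support:
                count += 1
                support.update((q1, q2))  # the gate spreads the light cone
    return count

# 6 qubits, 3 brickwork layers, observable supported on qubit 2 only.
layers = [[(0, 1), (2, 3), (4, 5)], [(1, 2), (3, 4)], [(0, 1), (2, 3), (4, 5)]]
print(active_volume(layers, measured_qubits=[2]))  # -> 6
```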
So in our demos, we applied QESEM to Trotterized kicked-Ising circuits, and we chose to work with a pi-over-four ZZ angle, which is non-Clifford and in general challenging for classical simulations, bringing our circuits closer than previous demos to generic quantum circuits. Okay, so now here are some demos. The left plot is a demo with string operators of different lengths measured after three Trotter steps. And on the right, you can see six Trotter steps with two-point correlations with different separations between the two points. And each data point here is the average of identical observables over the 40-qubit chain. Okay, so you can see in red here that the noise causes a very drastic decay in the observable's expectation value, indicated by the large values of the decay rate lambda. The larger lambda is, the more challenging error mitigation becomes. Nevertheless, QESEM's results in blue managed to recover very nicely the ideal behavior given by the gray line. In these demos, the active volumes are 88 and 108. Okay, so let's move now to larger active volumes to get a feel for just how powerful this really is. Here you can see demos with seven and ten Trotter steps, reaching active volumes of roughly 370 CNOT gates. Okay, so now this is really exciting, since to the best of our knowledge, these are the largest active volumes ever achieved with unbiased error mitigation. Okay, so note that these demonstrations required a wall time of under three hours each, and were done on IBM Brisbane, whose fidelity is 98.5%. The value of the fidelity is really important, because for machines with larger fidelity, the very same error mitigation protocol would yield significantly larger volumes. For example, when moving to a fidelity of three nines, 99.9%, the 370 active volume reached on Brisbane here would be equivalent to a vastly larger active volume of over 5,000 CNOT gates, which would already be well within the quantum advantage regime. Okay, so what should we expect going forward? The plot you see here visualizes the growth in QESEM-enabled active volumes as we use IBM devices with increasing fidelities. In fact, quantum advantage is already achieved here with fidelities of 99.9% and a few hours of QPU time, as you can see with the little star there. This means that on machines with three-nines fidelity, QESEM provides access to generic quantum circuits which cannot be simulated by classical supercomputers. Okay, so such abilities are transformative for quantum algorithm designers, or quantum computational scientists as we learned this morning, since they can now use their quantum computers with QESEM to test how their quantum algorithms work and how they would scale with the input size. In the table, I listed this and other benefits which Qedma's software provides. Okay, so let me summarize. If you have, or are considering buying, access to a quantum computer, Qedma's software will empower you to run your quantum algorithms on it better and faster. So come by and catch me during one of the breaks, or let's just continue discussing this over coffee. I'd love to do that. Or just drop me an email. Thanks for your attention. And now please welcome Sarah Sheldon back to the stage. All right, thank you, Dorit, and thank you to all of our speakers of this first research thread. All of these error mitigation and error suppression techniques are essential for getting to larger circuit volumes.
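One plausible back-of-envelope reading of the fidelity-to-volume scaling quoted above is to hold the total accumulated gate error constant; the formula below is our reconstruction of that argument, not Qedma's published derivation, and it reproduces the quoted "over 5,000" figure.

```python
# Back-of-envelope check: if the mitigation cost is set by the total
# accumulated error, the reachable volume scales roughly like
# V2 ~= V1 * log(f1) / log(f2) for per-gate fidelities f1 and f2.
import math

v1, f1, f2 = 370, 0.985, 0.999
v2 = v1 * math.log(f1) / math.log(f2)
print(round(v2))  # ~5590 CNOTs, consistent with "over 5,000"
```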
And all of this work really complements the research that we're doing at IBM as well. As we'll see later in the quantum software session, you'll hear more about the error mitigation techniques that we're working on in-house and all the work that we're doing to make this work more accessible to you. And speaking of the incredible capabilities we're building under the hood of our systems, let's shift gears to our next research thread. This group of speakers is going to deliver a number of demonstrations on how we're incorporating dynamic circuits, a powerful capability that gives us greater flexibility in how we write quantum programs and which can help us achieve quantum advantage with short-depth circuits. So to get us started and give us an overview of what dynamic circuits are and how they work, please join me in welcoming the first speaker of this group, IBM research scientist Elisa Bäumer. Thanks, Sarah, and good morning, everyone. Before diving into the applications of dynamic circuits, let us quickly discuss how dynamic circuits differ from regular quantum circuits and why they are useful. A regular quantum circuit is a sequence of unitary gates acting on qubits, followed by measurements at the end. Dynamic circuits are quantum circuits with some additional features. First, mid-circuit measurements, which allow us to measure our qubits not only at the end, but at any time in between. Second, we can perform classical calculations in real time based on the measurement results of such mid-circuit measurements. These are subroutines in many quantum algorithms and will ultimately be essential for future error correction protocols. But the benefit of calculating in real time comes with a third feature, which allows us to apply feed-forward operations, meaning that we can apply gates conditioned on the outcomes of our classical calculations within the coherence time of our qubits. But why is that actually beneficial? A famous application that requires dynamic circuits is quantum error correction, where they enable us to detect errors in real time and directly correct them to continue the quantum algorithm. Dynamic circuits are also required for interactive protocols verifying a quantum computational advantage, as well as for convolutional quantum neural networks. In this session, we will also see their advantage for circuit knitting, which includes techniques that allow partitioning of large quantum circuits into subcircuits that fit on smaller devices, as well as for preparing certain highly entangled states for quantum simulations that describe, for instance, quantum phase transitions. In the following, I will focus on the creation of long-range entanglement, where we use the fact that classical calculations and feed-forward can also be used as information transfer to spread correlations faster. This allows dynamic circuits to establish entanglement between arbitrary qubits in a shallow quantum circuit. Why do we care about long-range entanglement? The creation of long-range entanglement is a task at the heart of every quantum algorithm. Algorithms usually start with the preparation of states that possess multipartite long-range entanglement and then require many long-range entangling gates due to the constrained local connectivity of the quantum processors. As you will hear more about state preparation later, let us focus now on the latter task and take as an example CNOT gate teleportation.
The goal here is to apply a long-range CNOT gate over a 1D chain of N qubits that are subject to nearest-neighbor connections only. The standard way, using only unitary gates, would essentially require swapping the two outermost qubits all the way to the middle, thus requiring a circuit depth that scales linearly with the number of qubits N. Using dynamic circuits, however, we can teleport the gate in constant depth, which means that the time it takes to perform this task does not increase with the number of qubits. Thus, especially as we are scaling up, we can see a drastically improved circuit depth. Now let's take a look at how this theoretical concept performs in practice. In our experiment, we teleported the CNOT gate using up to 101 qubits across an entire quantum processor, benchmarking the performance of dynamic circuits against the most competitive unitary ones. While we can see that the unitary implementation, in light blue, gets better fidelities for fewer than approximately 10 qubits, dynamic circuits, here shown in pink, quickly outperform it as we increase the number of qubits. This shows that over large distances, CNOT gates are indeed more efficiently executed with dynamic circuits than with unitary ones. Building on this positive result, we propose as an outlook to use gate teleportation as a workaround to enable effective all-to-all connectivity for limited-connectivity, large-scale devices, by connecting all system qubits, shown in pink, with each other via a bus of ancilla qubits, shown in light blue. This would offer a possible solution to one of the main hindrances of current architectures. So I hope with this I managed to convince you that dynamic circuits are a promising feature to help overcome current limitations of hardware, as they could drastically improve the circuit depth, with the effect becoming more significant as we scale up the number of qubits. Thank you. Next, to discuss how dynamic circuits are playing a role in enhancing our circuit knitting capabilities, please welcome IBM principal research scientist Maika Takita. Thanks, Elisa. Hi, everyone. Today, I'm going to introduce a new circuit knitting technique that incorporates dynamic circuits. Circuit knitting is a set of techniques that employs multiple circuits and classical post-processing. In particular, I'm going to talk about a technique called gate cutting, which allows us to use virtual gates to simulate a non-hardware-native topology, and even systems with more qubits than physically available on a single device, at the cost of simulation overhead. Imagine you have a target topology you want to simulate. Here's an example graph with 103 nodes with periodic boundary conditions. This will fit on a single IBM Eagle processor, but it requires a few long-range gates. You can enable these long-range gates using swap gates or by teleporting a two-qubit gate with dynamic circuits, as Elisa described earlier. Here, we show that we can implement a virtual gate with quasi-probability decomposition, where you decompose a quantum channel into a sum over quantum channels that use either local operations only, or the combination of local operations and classical communication (LOCC), which is enabled with dynamic circuits. The technique using local operations has previously been demonstrated, but here, I will demonstrate a new technique using classical communication that was introduced by our fellow IBMers.
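Here is a minimal Qiskit sketch of the teleported long-range CNOT that Elisa described: a Bell pair on two ancillas bridges the distant control and target, and mid-circuit measurements plus feed-forward (`if_test`) complete the gate in constant depth. The register names are illustrative, and in a real chain the two ancillas would sit far apart.

```python
# Teleported long-range CNOT using a Bell pair, mid-circuit measurements,
# and feed-forward corrections: the standard dynamic-circuit construction.
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

ctrl = QuantumRegister(1, "c")
anc = QuantumRegister(2, "a")
tgt = QuantumRegister(1, "t")
m = ClassicalRegister(2, "m")
qc = QuantumCircuit(ctrl, anc, tgt, m)

qc.h(anc[0]); qc.cx(anc[0], anc[1])   # Bell pair between the two ancillas
qc.cx(ctrl[0], anc[0])                # entangle control with its ancilla
qc.cx(anc[1], tgt[0])                 # entangle target with its ancilla
qc.h(anc[1])                          # rotate for an X-basis measurement
qc.measure(anc[0], m[0])              # mid-circuit measurements
qc.measure(anc[1], m[1])

with qc.if_test((m[0], 1)):           # feed-forward corrections
    qc.x(tgt[0])
with qc.if_test((m[1], 1)):
    qc.z(ctrl[0])
```

The depth of this construction is independent of how far apart the control and target sit, which is exactly the scaling advantage reported in the 101-qubit experiment.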
The gate cutting protocol with LOCC involves a gate teleportation circuit, the same kind of circuit you would use for long-range entanglement, where you consume a Bell pair. Here, instead of the actual Bell pair we see in green, we use a cut Bell pair factory. In this example, we can rebuild two Bell pairs by running 27 different circuits of this form. Now, using this protocol, to verify the methodology we created a graph state and checked an entanglement witness on each of the edges. We claim success when the witness W is negative, which signifies an entangled state. Looking back at the target graph, we compare three methodologies. The first, the dropped-edge case, generates a very similar, but not quite the target, graph. This only uses three layers of two-qubit gates, but does not connect these four pairs of distant qubits. The full graph method generates the target graph, but it requires many swaps. Finally, we implement virtual gates using LOCC, and only with this method do we achieve a 100% success rate, observing an entanglement witness across the whole graph on all 116 edges. Now, what can you do if you have two QPUs that are classically coupled? By using circuit knitting with LOCC, we can now generate a graph that has more qubits than we would have access to on a single device. Creating this 134-node graph is not possible on an individual Eagle processor, even if we allow many swaps. We show that we have a 100% success rate with LOCC, succeeding on all 143 edges, whereas this would be impossible with other methods. Dynamic circuits give us new capabilities in gate cutting, where we can generate simultaneous Bell pairs efficiently to be consumed in a teleportation circuit. With this new technique, we can start imagining more use cases where the problem doesn't map onto an available topology or doesn't fit on a single device. Thank you. Next, to talk about how long-range entanglement is enabling some exciting research in many-body physics, I'm very pleased to welcome Guo-Yi Zhu, postdoctoral researcher at the University of Cologne, to the stage. Thank you, Maika. Hello, everyone, I'm Guo-Yi Zhu from the University of Cologne. Now, let me guide you through a use case of applying the IBM quantum processor to study some fundamental many-body physics: an entanglement phase transition in a shallow quantum circuit. When we measure a quantum bit, it always collapses the wave function into a classical state. But if we entangle three qubits by local unitary gates before we measure the middle qubit, we carry out a teleportation experiment which creates a Bell pair between the two distant qubits. Now, we take this as the building block and repeat it many, many times simultaneously. As a result, we can reach a glassy Greenberger-Horne-Zeilinger state, where all the qubits are entangled as a whole, forming a macroscopic Schrodinger's cat. To do this, we need many auxiliary qubits that serve as the bridges for the entanglement. Then a practical question follows: is the quantum state stable or scalable when the circuit is subjected to small but finite errors? Is there an entanglement phase transition? Through the math, we learn that the naive one-dimensional circuit is not scalable, but the two-dimensional quantum protocol can host a stable phase. What is the nature of the phase transition here, and how do we verify it in experiment? The IBM quantum processor, with up to 127 qubits in a heavy-hexagonal array, is a perfect place for us to address these questions. Now, let us dive a little bit into the details of the implementation.
We divide the qubits into two groups: the black ones for the system qubits and the gray ones for the auxiliary qubits. Then we color the bonds such that we can apply unitary gates to cover all the ancilla qubits and all the bonds within three cycles. Then we projectively measure out all the gray qubits in the middle and feed the outcomes to a classical computer to process. The remaining black qubits can form a long-range entangled state that is decodable. By post-processing, we decode a bimodal distribution of the black qubits. Here, the magnetization relates to the number of qubits that align in the same direction. By scaling up the system size to full use of the 125 qubits, we see an enhanced bimodal distribution. To get a bigger picture of the applicability of the protocol, we manually inject errors into the circuit, which allows us to probe the phase diagram. This includes the coherent error that distorts the interaction between two qubits and the incoherent error that corrupts the communication channel between the quantum device and the classical computer. As a result, we map out such a phase diagram. Two remarks follow. First, we find a line of thresholds that predicts when the protocol would fail. As you can see, the bottom right corner holds the stable long-range phase, which has the same long-range order as the Greenberger-Horne-Zeilinger state. Fortunately, the Sherbrooke device is below the threshold, where we could succeed. Secondly, deep fundamental physics emerges naturally as a gift to us. The quantum state is described by the same math as the famous Nishimori line from the spin glass model. Along this line, the temperature and the randomness have to maintain a delicate balance. In the classical world, the critical point, shaded as a blue disc, is a single point. To hit it, we need to fine-tune the temperature and the randomness. In the quantum world, we have an entire line that shares exactly the same physics. Therefore, we never miss it, as long as our quantum device is below the threshold, such that we can cross the border and hit the criticality. Thank you for your attention. Now, for our next speaker, who will talk about error-mitigating dynamic quantum circuits, please welcome IBM staff research scientist Riddhi Gupta. Thank you, Guo-Yi. As we've heard from other speakers, dynamic circuits are powerful and increasingly in demand for quantum computation. By using dynamic circuits, we reduce the resource cost for preparing high-quality complex quantum states. In dynamic quantum computation, we measure some subset of active qubits, and we use this measurement information to change what quantum operation we want to perform next. Real-time processing of this measurement information has to be fast. These feed-forward operations can only take several hundred nanoseconds, requiring extremely sophisticated control systems engineering. We face practical challenges in using dynamic circuits. For instance, we don't understand the noise we induce on active qubits when we're performing mid-circuit measurements and feed-forward, and any additional noise could erode the benefits that we seek to gain from these operations. Realizing the full potential of dynamic circuit capabilities means that we need to understand and mitigate errors in these operations. This year, we've made substantial progress in overcoming these challenges.
First, we've introduced a procedure for learning noise in circuits containing mid-circuit measurements and feed-forward. These noise-learning benchmark circuits on the left give us information about noise in our hardware, shown as decay curves to the right. Second, we use this noise information to mitigate errors using PEC, and we call this measurement PEC, or MPEC. To show that MPEC works, we focus on a foundational building block for dynamic circuit capabilities. In this test circuit, we encounter virtually every component of dynamic quantum computation, including gates, mid-circuit measurements, and different types of control logic. Actually, what I'm showing here is a type of logic gate called a Toffoli gate, and in an experiment, we measure half of its truth table. These truth table lines are highlighted in blue, and we plot input states along the x-axis. For all input states, the mitigated output of this circuit using MPEC is closer to the ideal value than the raw data in blue. But we don't just have to measure the truth table; we can zoom in on specific input states, and as before, the raw data in blue degrades quickly, while the mitigated output using MPEC, in orange, does not. We now show that MPEC can be applied at greater qubit numbers. Here, we prepare Bell states on 12 qubits, requiring 16 logical cases to be rapidly processed in real-time feed-forward. We also prepare a 42-qubit identity circuit with 28 CNOTs and 14 mid-circuit measurements, and as before, as we extract increasingly complicated information from left to right along the x-axis, we see that the mitigated output in orange stays close to unity, while the raw data in blue decays rapidly to zero. This work positions all of us to use error-mitigated dynamic quantum circuits. Thank you for listening, and I'd like to call Sarah Sheldon back to the stage. Thank you, Riddhi. It is so exciting to think about how researchers are going to put these capabilities to use in the future. So now you've heard about how we're getting to larger circuit sizes and how we're adding greater functionality to our circuits. Our final group of speakers will show us some remarkable demonstrations of how researchers are putting all of the pieces together to look at different scientific problems. These experiments leverage steps two and four of the Qiskit patterns to optimize circuits and get more accurate results for quantum use cases. As we'll see in the next four presentations, these experiments could be helpful for solving a wide array of scientific problems. So for our first example, a look at new quantum computing methods for studying discrete time crystals, please join me in welcoming research scientist Nicolas Lorente. Thank you, Sarah. Our work centers on discrete time crystals in quantum systems, and the exploration of disorder and many-body localization using noisy quantum computers. This work was done in the Basque Country in northern Spain by myself and two of my Basque colleagues, as well as with our collaborators at IBM Quantum in Dublin and New York. Why are we interested in time crystals? Because they represent a new state of matter in the time domain. You're probably familiar with different phases of matter like metals, insulators, and superconductors, but now we can also think about how matter behaves when things are changing in time. One example is when you irradiate matter. If you irradiate matter in a continuous way, you expect that something is changing, and you can eventually drive your system into a new state.
Now the problem is that when you do that, you will reach infinite temperature. To avoid that, you need to reduce the number of degrees of freedom that you can access in your system. We do this through many-body localization: interactions that are local and disordered, so that excitations cannot spread out. In this way, we can reduce the way information, interactions, energy, and eventually entanglement spread in our system. So how does a time crystal work? The image we see on screen here helps explain what a time crystal is. At a certain moment, we have a set of magnetic moments, spins, aligned in a certain direction, and then we apply a spin-flip pulse that is going to change the spins from up to down. And at the same time, we let the system relax to find its equilibrium, or its steady state, by being subjected to the many-body localization interactions. Next, we apply another spin flip that will change the spins again, and repeat the same process. What we find here is that if we separate the spin flips by a time t, the actual period of your system is not t. It's actually 2t, because you need double the time to reach the same initial state. When we look at the signature of the system, the periodicity in time turns out to be double and the frequency half, which is what you see in this graph here. So the properties of these time crystals are that you have subharmonic dynamics, this new frequency I'm talking about. But you also have what I was talking about before, zero heating. We reach equilibrium, and the system really acts like a crystal in the sense that you have rigidity; you have temporal and long-range order. But how do we actually do this? The general idea is to introduce disordered interactions between spins to avoid entanglement spreading. And we need to be able to measure this entanglement to be sure that we are really creating this many-body localized state. To do this, we define a new entanglement witness, which is the quantum Fisher information. The quantum Fisher information measures how spins depend on each other. This has a direct representation in qubits, and we can afterwards drive the system, which we know is in low entanglement, to reverse spins and create our time crystal. So the above steps are giving us access to the time crystal on a noisy quantum computer. On a noisy quantum computer, we will be able to create all these things. So the first step that we are aiming at is to produce a reliable quantum Fisher information estimation on a noisy computer. Our colleagues in Ireland and also here in New York are doing an excellent job applying their technology in Hamiltonian simulations, dynamics particularly, to reduce the depth of the circuits that are required to produce this kind of simulation. In this graph, what you're seeing on the y-axis is a correlator between two adjacent qubits in the middle of a 50-qubit system, whose Hamiltonian in this case is the Heisenberg Hamiltonian. These are the kind of results they can obtain with a new technology that enhances the typical product formulas into a multi-product formula. In black, you have the exact results for this system, and in red and blue lines, you have order-three and order-two Trotter algorithms exploiting tensor networks. And you see that, of course, the order three is much closer to the true ground state.
In dashed lines, you have the results of a noisy quantum computer using again the same order-three and order-two algorithms, and in purple, you have the results of the new method that approaches the true ground state much more closely. So the key features of these techniques are that the error that you have in the Trotter scheme is quadratically suppressed, and we can actually get much better precision without increasing the depth. Thank you very much for your time. We're excited to show what we can do in this collaboration between the Basque Country and IBM. Next, to discuss quantum methods for problems in many-body physics, please welcome IBM research scientist Oles Shtanko to the stage. Thank you, Nicolas. Hello, everyone. My name is Oles Shtanko, and as you may know, quantum many-body physics is a branch of science that studies interactions of many particles on a quantum scale, such as atoms, molecules, electrons, photons, et cetera. And following the original idea of Richard Feynman, you can translate the motion of these particles into the changing state of qubits, which you can engineer to study arbitrary quantum systems. In addition to simulation of specific systems, we can also discover general physical laws that govern these dynamics. A good example of physical laws is conservation laws. You may be familiar with conservation laws like conservation of energy or conservation of momentum. Those are very useful laws that can help us solve many classical mechanics problems. In quantum mechanics, finding conservation laws can be useful as well for simplifying and solving problems. They can also be used to distinguish different states of matter, to discover new physical phenomena, and sometimes even to describe quantum dynamics fully. Let me illustrate how this looks in terms of quantum hardware. Let's say we have a collection of qubits, represented here by circles. When we measure the signals from individual qubits, we may find that certain signals are not independent. In fact, summing them with certain weights yields a value which is constant over time. That would be a conservation law. However, finding this conservation law is a non-trivial task and requires some techniques and imagination. To formally study this problem, we create a circuit that has conservation laws. We call them local conservation laws because they apply to signals from small, local groups of qubits. And we know that these local conservation laws are present in the system because the circuit we create mimics a material that exhibits many-body localization. Let me explain the structure of the circuit in more detail. As usual, time flows from left to right. Starting from a simple input state, we apply a number of identical blocks, where each block contains the same set of gates. And we made the gates as close as possible to the native gates of IBM quantum hardware, which makes the circuit very efficient to run. After we designed the circuit, we implemented it using publicly available Qiskit tools, so anybody here who has access to IBM quantum hardware can repeat this experiment. Finally, we ran 350,000-plus circuits on 124 qubits, including error mitigation overhead, providing us with a full portrait of the system in terms of local conservation laws. Let me show some results. In the moving image on the top, you can see a comparison between circuits that have conservation laws and circuits that don't. Here, each cell represents an individual qubit, while the color of the cell represents the qubit's state.
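As an aside, here is what "weights that make the summed signal constant" can look like numerically, on synthetic data; finding a null vector of the time differences is one simple illustration of the idea, not the learning procedure actually used in the experiment.

```python
# Synthetic-data illustration: a conservation law is a weight vector w
# with w . s(t) constant in t, i.e. a null vector of the matrix of time
# differences of the per-qubit signals s(t).
import numpy as np

rng = np.random.default_rng(7)
T, n = 40, 6
signals = rng.normal(size=(T, n))       # toy stand-in for per-qubit signals
w_true = rng.normal(size=n)
# Force the signals to conserve w_true . s(t) exactly.
signals -= np.outer(signals @ w_true, w_true) / (w_true @ w_true)

diffs = np.diff(signals, axis=0)        # s(t+1) - s(t), shape (T-1, n)
_, svals, vt = np.linalg.svd(diffs)
w_found = vt[-1]                        # right-singular vector of ~zero singular value
print(svals[-1])                        # ~0: w_found . s(t) is constant over time
```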
As you can see, the system that has conservation laws tends to retain some information about its initial state, while the system without quickly thermalizes. So this persistence is something which allows us to extract conservation laws from the system dynamics. The lower plot shows the structure of a typical conservation law, which we also call a local integral of motion. In particular, the x-axis, which is parallel to the slide, represents the weights by which we need to multiply the signals from the first slide such that they form a constant. These weights decay exponentially away from the position of a certain central qubit. From the direction which faces more toward us, we see that the weights also decay exponentially as we sum signals from more than one qubit. There are three important results that we get from studying this problem. First, we discovered a new way to learn quantum systems. This is a much more efficient alternative to process tomography in terms of the number of data points we should obtain from the circuit. Second, we found new physical predictions about two-dimensional many-body localization. Finally, during the course of this project, we discovered that these conservation laws can be used as a new method for testing the capabilities of our devices without the need for direct classical simulation of the device. This gives us the hope that in the future, quantum computing methods like this will allow us to study systems in a way that surpasses the capabilities of classical algorithms. At this point, I would like to thank you for your attention. Our next speaker will talk about how quantum computing could be helpful for solving common problems that appear in many different scientific disciplines. Please join me in welcoming University of Tokyo research associate Nobuyuki Yoshioka to the stage. Thank you, Oles. Chemical reaction rates, material science, high-energy physics: what do these have in common? At the heart of these fields is a ground state problem. A ground state is the most energetically stable state of the system, which is extremely important since it dictates the behavior of the system at low temperatures. While it is known that quantum dynamics simulations are already hard for classical computers, ground state problems are even more challenging. We believe that there are many useful models whose ground states can be simulated efficiently on quantum computers. Ground state problems can be solved in various ways using quantum computers. On fault-tolerant quantum computers, we can run the phase estimation algorithm, while for noisy quantum devices, we have predominantly used the variational quantum eigensolver, or VQE. While these algorithms are widely investigated, there are known drawbacks. Phase estimation needs excessively long dynamics simulations, and the VQE requires a significantly large number of function calls. Here we want to present that the best way of utilizing current noisy quantum devices is the quantum Krylov subspace methods. Quantum Krylov methods also use dynamics as a subroutine, and they have convergence guarantees as in phase estimation. On the other hand, unlike phase estimation, we believe that we can get interesting results beyond classical, even with circuits that are not too deep. Furthermore, quantum Krylov methods have the advantage that we can solve the problem with exponentially small memory consumption in comparison to its classical counterpart. Now, let me explain the quantum Krylov method we used; a sketch of its classical post-processing step is shown below.
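Before the quantum details, here is a minimal sketch of the final classical step, assuming the subspace matrices H[i, j] = <psi_i|H|psi_j> and overlaps S[i, j] = <psi_i|psi_j> have already been measured on hardware; the random toy data and the thresholding scheme are illustrative.

```python
# Classical tail of a quantum Krylov workflow: solve the small generalized
# eigenvalue problem H c = E S c for the lowest energy in the subspace.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                       # subspace dimension
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
S = A.conj().T @ A + 1e-3 * np.eye(d)       # a valid (positive) toy overlap matrix
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = B + B.conj().T                          # a Hermitian toy "Hamiltonian" block

# Threshold small eigenvalues of S before solving, a common way to
# stabilize the problem against shot noise in the measured overlaps.
evals, evecs = np.linalg.eigh(S)
keep = evals > 1e-6
P = evecs[:, keep] / np.sqrt(evals[keep])   # whitening transformation
energies = np.linalg.eigvalsh(P.conj().T @ H @ P)
print(energies[0])                          # lowest-energy estimate in the subspace
```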
In our experiment, the quantum circuits are noisy, and therefore we use an error mitigation technique known as probabilistic error amplification. By amplifying the error rate of the circuit by factors of 1, 1.5, and 3, for instance, we retrieve the zero-noise result by extrapolating the measured values. For those who are interested in more details, please also attend track three of the utility workshop tomorrow, presented by Mirko Amico.
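To make the extrapolation step concrete, here is a toy version with invented numbers; the measured values and the simple linear model below are illustrative assumptions, and in practice exponential extrapolation models are also common.

```python
# Toy zero-noise extrapolation: fit the measured expectation values as a
# function of the noise amplification factor and evaluate the fit at zero.
import numpy as np

noise_factors = np.array([1.0, 1.5, 3.0])  # amplification factors from the talk
measured = np.array([0.82, 0.74, 0.52])    # hypothetical noisy expectation values

coeffs = np.polyfit(noise_factors, measured, deg=1)  # simple linear model
zero_noise_value = np.polyval(coeffs, 0.0)
print("extrapolated zero-noise value:", round(float(zero_noise_value), 3))
```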
Last year at the Quantum Summit, we showed that by using this quantum Krylov method we were able to run our algorithm on a four-qubit spin model. This year, using the newly introduced 133-qubit device, Heron, we have scaled the simulation up to 57 qubits. We have shown that by increasing the dimension d of the subspace matrix, we can systematically improve the ground state energy estimate. Here we see a decent convergence to the exact lowest energy within the single-particle sector of the system. Okay, now that we have this tool for nearest-neighbor models, we envision using it to look for further utility in more complex problem setups. We may investigate various dopings, such as half filling, or introduce further-neighbor interactions that can induce frustration in the system. More directions can be explored even beyond ground state problems. We can study excited states, which are important for investigating phenomena such as the HOMO-LUMO transition in molecules. We may also tackle dissipative quantum systems, which describe reactions in biological systems. Finally, we may simulate finite-temperature phase diagrams of materials. So this concludes my talk. Thank you for your attention.

Now, to discuss how quantum computing can play a role in the study of cosmology and high-energy physics, please welcome University of Washington Professor of Physics Martin Savage to the stage.

Thank you, Nobu. So from the earliest moments of the universe, where matter was created, to the densest matter we find at the centers of collapsing supernovae, to the highest-energy collisions we create in the laboratory or find in cosmic rays, physicists strive to understand the nature of matter under extreme conditions. Spectacular advances in science and technology in the 20th century brought us quantum mechanics, quantum field theory, and the remarkably robust Standard Model of nuclear and particle physics. However, our understanding of matter is incomplete. We do not know where the matter-antimatter asymmetry comes from. We do not know the phase diagram of matter as the temperature and density are raised far beyond where we have experimental control. And we do not know how these systems equilibrate. We've developed sophisticated theoretical and computational tools with which to make predictions from the Standard Model, but as Feynman made clear, in order to simulate quantum systems at scale, one needs to simulate them with smaller quantum systems in which one has control of superposition and entanglement. We're focused on systems of fundamental particles where quantum mechanics plays an essential role in their structure and dynamics, making them prime candidates for achieving an early quantum advantage. We're now just starting to develop the understanding, the techniques, and the necessary collaborations to simulate quantum field theories with quantum computers. This is taking the form of identifying a portfolio of less complex problems that share some of the key features of the target problems: for example, simulating quantum field theories defined in one and two dimensions in order to make progress toward simulating in three dimensions. Electromagnetism in one spatial dimension has emerged as one such theory of utility. While limited in some ways, it shares important features with quantum chromodynamics, the theory of quarks and gluons that defines the strong interactions. Quantum mechanics is responsible for the vacuum rearranging itself to screen electric charges, confining them at low temperatures into electrically neutral composite objects, the same phenomenon that occurs in the strong interactions, where quarks and gluons are confined into protons, neutrons, pions, and much more. The first step toward simulating processes in interacting systems, such as collisions and dynamics, is to build the quantum vacuum. Our philosophy is to include as much physics as we already know in order to minimize the computational tasks required of the quantum computers. Confinement of charges is used to reorganize the Hamiltonian, and hence to reorganize the quantum circuits. We also use the fact that nature is Lorentz invariant to develop quantum circuits that can scale from small to modest to large numbers of qubits, and hence lattice sites. This physics awareness was combined with new simulation algorithms and mitigation techniques, including an ADAPT-VQE, to prepare the vacuum in a scalable way.
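For a flavor of what such a qubit Hamiltonian looks like, here is a tiny exact-diagonalization sketch of the lattice Schwinger model (electromagnetism in one spatial dimension) with the gauge field eliminated via Gauss's law; the couplings, sign conventions, and six-site size are illustrative assumptions, and the direct diagonalization stands in for the scalable vacuum preparation used in the actual work.

```python
# Toy lattice Schwinger model: hopping, staggered mass, and electric energy.
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def embed(op, site, n):
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

n, x, mu = 6, 0.6, 0.1  # staggered sites, hopping strength, fermion mass
dim = 2**n
H = np.zeros((dim, dim), dtype=complex)
for j in range(n - 1):  # kinetic (hopping) term
    H += (x / 2) * (embed(X, j, n) @ embed(X, j + 1, n)
                    + embed(Y, j, n) @ embed(Y, j + 1, n))
for j in range(n):      # staggered mass term
    H += (mu / 2) * (-1) ** j * embed(Z, j, n)
for j in range(n - 1):  # electric energy of each link, fixed by Gauss's law
    L = sum(0.5 * (embed(Z, k, n) + (-1) ** k * np.eye(dim)) for k in range(j + 1))
    H += L @ L

# The vacuum is the ground state; the chiral condensate is a staggered
# average of the site occupations (one common sign convention).
evals, evecs = np.linalg.eigh(H)
vacuum = evecs[:, 0]
condensate = sum((-1) ** j * float(np.real(vacuum.conj() @ (embed(Z, j, n) @ vacuum)))
                 for j in range(n)) / (2 * n)
print("vacuum energy:", evals[0].real, " chiral condensate:", condensate)
```

The long-range ZZ interactions generated by the electric term are where confinement enters the problem, and they are the part of the Hamiltonian that the physics-aware reorganization described above is designed to tame.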
Here are some of the results of the simulations using IBM's 127-qubit quantum computers. The plot shows the chiral condensate, or, in this model, the number of particles in the prepared vacuum. The squares show the raw results, and the points with error bars show the results after error mitigation. The dashed line corresponds to the classical expectation. I should be clear that this result does not represent a quantum advantage, but it is an essential step on the road to achieving a quantum advantage in this theory. So what is the path forward toward a quantum advantage? We look toward the challenges faced at the highest energies and densities. Colliding high-energy protons or nuclei together produces a high multiplicity of lower-energy composite and fundamental particles, with probabilities that are well beyond first-principles analytic and classical calculations. Now that the vacuum can be easily prepared, we are presently at the earliest stages of creating composite particles on top of this vacuum and colliding them together. As the collision energy increases, so does the complexity of the simulation, and it eventually becomes intractable for classical computing. The lower left-hand panel shows where we are today, while the lower right-hand panel shows an event from the Large Hadron Collider at CERN, which is where we want to go. This research area is a great example of the fact that a fully integrated, strategic co-design effort, including high-performance computing, quantum information science and technology, and fundamental physics, is required to take us where we want to go. Thank you.

So now, to conclude the session, please welcome back Sarah Sheldon.

Thank you, Martin. We've heard from a lot of different researchers in this session who are exploring what they can do with today's utility-scale quantum computers. But what is the message you should take away from these demonstrations? What do these experiments tell us about the current state of quantum computing? In the keynote earlier today, we learned that until recently, research with quantum computers was focused almost entirely on small-scale proof-of-concept experiments. But what we've seen in this session is a fundamental shift to a new era in the history of quantum computing, one that challenges us to discover what we can really do with a utility-scale quantum computer. We've seen researchers pushing into the 40-to-50-qubit range, where classical methods begin to struggle, by extending error suppression techniques and finding new ways to combine quantum and classical resources. We've also seen others moving even further, into the 100-qubit range where exact classical methods may fail, by exploring dynamic circuits and by mapping new problems from fields like condensed matter and high-energy physics. And that's exactly what we mean when we talk about the era of quantum utility. The era of quantum utility marks the transition of this technology into a tool for true scientific inquiry, a tool that could help us access groundbreaking discoveries and insights. Dario's talk this morning reminded me that this week is also my ten-year anniversary at IBM. In December of 2013, we had our first IBM Quantum conference at the Watson Research Center in Yorktown Heights, and the topic of that conference was a question: what would you do with a quantum computer on the scale of 100 qubits? Now that we can not just imagine but actually start to test out the possibilities, I want to leave you with this question: what will you do with a utility-scale quantum computer? Thank you.