Hi, everyone. Thank you for coming and for continuing to join us. Let me talk a little bit about the future. In this session, we are going to tell you about the work we are doing to improve our whole software stack and to get the most performance out of our quantum systems. These developments will be very important as we move deeper into the utility era of quantum computing. As you saw earlier today, we have a very robust roadmap of larger-scale quantum systems coming in the years ahead, and we need to be ready with our software stack. In other words, we have to make sure that the next generation of our software stack aligns with all the innovations we are making in our hardware. Today, we are going to present some of the new software capabilities that will help us solve utility-scale problems. Let me divide this session into two parts. In the first part, we are going to explore how to use AI and automation tools to build this smart software stack. In the second part, we are going to talk about how we are going to run 5,000-gate circuits on our quantum systems. Let's start with a look at how we are going to combine quantum and AI. Here, you can see a traditional workflow between classical and quantum computing. Everything we do in our software stack is based on this relationship between classical and quantum computation. First, we need to prepare and build the circuits: pre-processing. Then we execute the circuits on our quantum systems. And finally, we need to post-process all this information, again with classical computation. In recent years, we demonstrated some ideas about how to create AI algorithms and run them on our quantum systems. But today, we are going to talk about something else: how we are going to integrate AI into each of these steps of classical computation, to empower and improve everything related to our quantum execution pipeline.
We see AI tools as a critical path to leveraging the full potential of quantum computing and pushing past its current boundaries. The following examples show how we are going to use these smart tools to improve the user experience and make it easier to create circuits, optimize them, execute them, and, in the end, get better results. For this, we are going to present four components that we have been working on over the last few months. The first component is a quantum code assistant, something that will help you use and learn Qiskit, OpenQASM, and our quantum services. The second component is focused on using AI models to optimize your circuits. After that, we are going to show you some tools you can use to combine quantum and classical resources in the most efficient way. Finally, we are going to explore how machine learning can help you improve error mitigation techniques. Now, to tell you more about this new AI-powered code assistant, please join me in welcoming my colleague, David Kremer, to the stage. David?

Thank you, Ismael. As Darío showed this morning, we're excited to introduce our AI-powered Qiskit Code Assistant. This is a powerful tool that will help you learn the best ways to use Qiskit and IBM Quantum services, and boost your quantum productivity with code completion and suggestions. The alpha version of the Qiskit Code Assistant will be available through a VS Code extension in the first quarter of 2024. The Code Assistant has been trained on millions of lines of code generated by the IBM Quantum team and community over the past few years, as well as all of our documentation and tutorials. The model is based on our own IBM Granite 20B model, with 20 billion parameters and an 8,000-token context length, fine-tuned on the latest version of Qiskit code and benchmarked with our own human-eval dataset.
With all this information, we've been able to create an assistant that helps users learn to write better Qiskit and OpenQASM 3 code. So let's take a look at how this will look for users. As you can see here, we have created a Visual Studio Code plugin that will help users experiment with the Qiskit Code Assistant and begin incorporating it into their work. In this case, the Qiskit Code Assistant uses Qiskit to generate a circuit with an observable and run it through a Qiskit Runtime Estimator primitive on one of our 127-qubit devices. And I am excited to share that we will make the Qiskit Code Assistant available to our premium users as an alpha release in the coming weeks. Now, let's talk about how we are introducing AI into the transpilation and optimization process. As we also saw in today's keynote, we are thrilled to introduce our AI-powered transpiler passes for circuit routing and circuit synthesis. These passes can be used as building blocks in your transpiling routines, just like any other pass, but they are based on AI. The AI passes usually produce shorter and shallower circuits than the standard transpiler passes in Qiskit, and they are also much faster than optimization methods such as SAT solvers. Let's take a look at routing. This is a benchmark we did on random circuits of depth 3 on 127 qubits. Here, the Qiskit level-3 transpiler does a good job of routing, but can we do better? This is our level-1 AI routing, our level-2 AI routing, and, hold on, our level-3 AI routing. That's a 30% improvement in circuit depth. Now, let's take a look at AI circuit synthesis. This is a benchmark we did on synthesizing random Clifford blocks of 8 qubits with linear connectivity. Qiskit Clifford synthesis already generates a good circuit, and Qiskit level 3 improves this a bit, but can we do better? You know the drill by now. This is our level-1 through level-3 AI synthesis, which is around 70% better in CNOT count. And it took only about a second each.
So now let's walk through an example of how you can integrate all this AI power into your transpiling workflows. We will start by looking at how we would normally transpile a circuit. This is the circuit: a three-layer random circuit on 127 qubits. It looks really ugly, and circuits like this are typically hard to transpile. To do it, we are going to use a Qiskit pass manager with a layout stage and a routing stage. This adapts the circuit to the connectivity of the selected quantum system. Transpiling usually introduces a large overhead on the circuit, and this result is already very good. But let's get more serious about optimization. To use the new AI pass, we just need to swap the SabreSwap routing pass for the AI routing pass and use it as we would use any other pass. This will run our AI pass on the cloud, seamlessly integrated into our transpiling workflow. Here, we reduce the depth overhead by 40%. And don't worry, our AI transpiler passes won't hallucinate: your output will always implement your input circuit. Finally, we are happy to introduce our Qiskit Transpiler Service. This service will allow you to transpile and optimize circuits the same way you would normally do it with Qiskit, but running on the cloud. And even better, you will also be able to include the AI transpiler passes in the process for even stronger optimization. The standalone AI transpiler passes, along with the Qiskit Transpiler Service, will be available as an alpha to our premium users starting today, so just scan the QR code. Next, to talk about how we're leveraging automation for better resource management, please join me in welcoming to the stage Jennifer Glick.

Thank you, David. This morning, you heard about the progress on our development roadmap, with Qiskit Patterns and Quantum Serverless enabling managed execution of workloads.
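To make the routing idea concrete, here is a toy sketch of what a routing stage does: it inserts SWAP gates so that every two-qubit gate ends up acting on physical qubits that are adjacent on the device's coupling map. This is purely illustrative — a greedy shortest-path strategy, not Qiskit's SabreSwap and not the AI routing pass.

```python
from collections import deque

def shortest_path(coupling, src, dst):
    """BFS shortest path between two physical qubits on the coupling map."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nbr in coupling[node]:
            if nbr not in prev:
                prev[nbr] = node
                queue.append(nbr)
    raise ValueError("qubits are not connected on this coupling map")

def route(gates, coupling):
    """Greedily route a list of two-qubit gates (pairs of logical qubits):
    before each gate, insert SWAPs along a shortest path until the two
    logical qubits sit on adjacent physical qubits."""
    phys2log = {q: q for q in coupling}   # physical qubit -> logical qubit
    log2phys = {q: q for q in coupling}   # logical qubit  -> physical qubit
    routed = []
    for a, b in gates:
        path = shortest_path(coupling, log2phys[a], log2phys[b])
        for u, v in zip(path, path[1:]):
            if v == path[-1]:             # now adjacent to b's qubit: stop
                break
            routed.append(("swap", u, v))
            la, lb = phys2log[u], phys2log[v]
            phys2log[u], phys2log[v] = lb, la
            log2phys[la], log2phys[lb] = v, u
        routed.append(("cx", log2phys[a], log2phys[b]))
    return routed

# A 4-qubit line: a CX between the endpoints needs two SWAPs first.
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(route([(0, 3)], line))
```

The SWAPs this naive router inserts are exactly the depth overhead that smarter routing, such as the AI pass described above, tries to minimize.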
In this section, I want to show you the next evolution of Quantum Serverless and how we're enabling automation for more efficient resource management as part of our innovation roadmap. As we move toward utility-scale workloads and beyond, we need tools that help us effectively manage and get the most out of both classical and quantum compute resources. We'll especially need to rely on more automation to help us select the best quantum resources for a particular workload. We're taking steps toward automating that selection with software tools that work together with Quantum Serverless for more efficient workload execution. Here, we see the software interfaces we're building to unify how quantum resources can be automatically allocated. These decisions are made based on higher-level criteria desired by the end user, such as system availability, or the optimal mapping of your circuits to the least noisy subset of qubits found by searching across a set of backends. And in the future, these interfaces will allow for more advanced custom selection, such as with AI. Let's see an example of how resource management with Quantum Serverless will work together with Qiskit Patterns. In this example, we will automatically select a backend, from those available to us, that corresponds to the least noisy subset of qubits for our circuit. We start with a generic Qiskit Pattern. Step one generates an abstract quantum circuit from the classical inputs. This circuit is not yet connected or optimized to a particular backend. Let's zoom in on step two. In the second step of a Qiskit Pattern, we optimize the abstract circuit for execution on a target backend. This time, however, rather than manually specifying a backend, we use the QPU selector from Quantum Serverless to automatically select one for us based on qubit quality.
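As a hedged sketch of what such a selector might do — the function and backend names below are hypothetical, and the real Quantum Serverless interface may differ — we can score each backend by the least noisy chain of adjacent qubits it offers, assuming a linear coupling map with a reported two-qubit error rate per edge:

```python
def best_chain_error(edge_errors, n_qubits):
    """Lowest total two-qubit error over any chain of n_qubits adjacent
    qubits, assuming a linear coupling map where edge i joins qubits
    i and i+1 and edge_errors[i] is its reported two-qubit error rate."""
    width = n_qubits - 1
    return min(sum(edge_errors[i:i + width])
               for i in range(len(edge_errors) - width + 1))

def select_backend(backends, n_qubits):
    """Pick the backend (a name -> edge-error-rates mapping) whose least
    noisy chain of n_qubits has the lowest accumulated two-qubit error."""
    return min(backends,
               key=lambda name: best_chain_error(backends[name], n_qubits))

# Two made-up devices: 'b' is uniformly mediocre; 'a' has two bad edges
# but a very clean region in the middle, so it wins for small circuits.
fake_backends = {
    "fake_device_a": [0.030, 0.008, 0.006, 0.009, 0.025],
    "fake_device_b": [0.015, 0.015, 0.015, 0.015, 0.015],
}
print(select_backend(fake_backends, 3))
```

Note how the best choice depends on the circuit: for a 3-qubit circuit the clean region of `fake_device_a` wins, while a circuit needing all six qubits would be better served by the uniform device.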
The selector does this by looking across all the backends available to us and transpiling our circuit to the least noisy subset of qubits on each of them. The selected target backend and the optimized IBM quantum instruction set architecture representation of the circuit are then passed into step three of the Qiskit Pattern for execution with the primitives. I'm excited to announce today that you can now start exploring resource management through the examples you've seen here, with system availability and qubit quality. At the same time, we are building out the interfaces to bring you even more powerful and custom resource management that leverages the advances we continue to see in hardware and software. Next, to talk about how AI can help us minimize errors, please welcome to the stage Iskandar Sitdikov.

Thanks, Jen. I'm going to be talking about the combination of machine learning and quantum error mitigation, and how bringing those tools together can offer many benefits to our users. AI has made tremendous advances in recent years, developing new methods, models, and architectures for solving a variety of problems. But can we leverage these AI capabilities to negate the effects of errors in quantum computation? Each approach to quantum error mitigation has its own benefits, but there is always a price to pay. Some error mitigation techniques are more scalable but require extra shots, which affects runtime. Others are precise, but besides extra shots, they also require a lot of runtime classical computation. Ideally, we want high precision and scalability while reducing runtime. We're exploring new error mitigation techniques with machine learning, which help us hide the cost that comes with other approaches. This is possible because we shift all of the mitigation cost onto a heavy training phase, where most of the compute happens. Training is heavy, but inference is fast, with comparable precision and scalability.
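As a minimal stand-in for this train-once, infer-cheaply idea — not the actual model or training pipeline — the sketch below fits a simple linear correction from noisy to ideal expectation values on synthetic training pairs. The expensive part is the training loop; inference is a single multiply-add per value:

```python
def train(pairs, lr=0.1, epochs=2000):
    """Heavy offline phase: fit a linear correction
    mitigated = w * noisy + b to (noisy, ideal) training pairs
    using plain stochastic gradient descent on the squared error."""
    w, b = 1.0, 0.0
    for _ in range(epochs):
        for noisy, ideal in pairs:
            err = (w * noisy + b) - ideal
            w -= lr * err * noisy
            b -= lr * err
    return w, b

def mitigate(noisy_values, model):
    """Cheap inference phase: apply the trained correction."""
    w, b = model
    return [w * x + b for x in noisy_values]

# Synthetic data: noise shrinks every ideal expectation value by 0.8,
# so the best linear correction is w = 1.25, b = 0.
training_pairs = [(-0.8, -1.0), (-0.4, -0.5), (0.0, 0.0), (0.4, 0.5), (0.8, 1.0)]
model = train(training_pairs)
print(mitigate([0.40, -0.16], model))
```

Real ML-based mitigation models are of course far richer than a two-parameter line, but the cost profile is the same: all the compute sits in training, and inference at runtime is nearly free.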
Now, let me show how it works. Of course, you know that your model is only as good as your dataset, and the rest is a classical machine learning workflow. First, we take our circuits and backend and encode them. Then we execute our payload against an error-mitigated backend to get both raw and mitigated results. We run it all through the model, compute the loss, and adjust the weights. We repeat this procedure many, many times until we get a converged, trained model. Now, let's see how users can leverage those trained models within Qiskit Patterns. We'll focus here on the post-processing step of our workflow, since that's where model inference occurs. First, we load our model; then we encode our circuits and backends. We pass it all through the model alongside the noisy values, and as a result, we get our mitigated values back. And that's it, just a few lines of code. Now, I would like to thank you, and please join me in welcoming Ismael Faro back to the stage.

Thanks, Iskandar. As you can see, we have an ambitious plan to apply AI to different aspects of quantum computing, and we are bringing this vision to life in other areas like device calibration, error correction, and much, much more. Now, we want to shift gears away from AI and talk about another area that is playing a big role in driving the improvements in our software stack: our goal to run circuits with 5,000 gates. To do that, and to conclude this session, please join me in welcoming Abhinav Kandala to the stage.

Thanks, Ismael. We want to run 5,000-gate circuits, but we also want access to reliable, accurate computations from these circuits within the next year. How do we do this in the absence of fault tolerance? The key is a collection of techniques that we refer to as error mitigation: methods that can enable access to noise-free observables on quantum computers today. And as we showed earlier this year, this can be done at a scale that is, in general, beyond exact classical simulation.
This was work that appeared on the cover of Nature in June this year, where we presented evidence for the utility of quantum computing before fault tolerance. The largest circuits considered here used 127 qubits with a circuit depth of 60 and up to almost 3,000 gates, which is, in general, a scale beyond exact classical simulation. At the time of publication, we didn't quite know whether the data points the quantum computer produced for these largest circuits were reliable. However, following subsequent classical simulations from world-leading groups, we saw that, in the absence of an exact solution, these various methods disagreed with each other by about 20%, and the experimental data points lay within this variance, giving us confidence in the device and the error mitigation methods. Since then, things have progressed pretty quickly in the past five months. Earlier today, we announced our updated roadmap, and it shows that we're really going to be focused on expanding the reach of error mitigation to even larger circuit volumes, going from 5,000 gates to 15,000 gates over the next five years. So what's this going to take? Error mitigation methods typically rely on running many noisy circuits and then combining the outputs from these circuits in post-processing to produce noise-free estimates. However, there's no free lunch: there is an associated runtime overhead from actually executing these many circuits. The gain, though, is that as we push down the error rates on the device, the circuit overhead for error mitigation gets smaller. Simultaneously, as we increase the speed of our systems, we're able to run more circuits per unit time. So advances in both quality and speed are going to be crucial as we push along our roadmap to expand the reach of error mitigation to increasingly large circuit volumes. I now want to highlight some of our recent progress on the hardware and the software along this path.
First, we've been trying really hard to get the techniques that enabled the utility experiment into the hands of users like yourselves, so you can leverage these tools and start to explore interesting problems and circuits. The utility experiment relied on an error mitigation technique referred to as zero-noise extrapolation. The typical workflow here is: you first define your quantum circuit, then characterize the noise in the gate layers of the circuit of interest. Once you have a representative noise model, you can begin to manipulate the noise in a way that effectively amplifies its strength. The task then is to measure the expectation values at these different noise levels. You can then combine these different outputs and extrapolate to what the result of the computation would have been in the absence of noise. As you can see, this is state-of-the-art research with a fairly involved number of steps. To make things easier for users and accelerate the discovery of quantum applications, we've simplified this workflow, so all you need is some easily accessible bits of code to reproduce experiments from the utility paper. I strongly encourage you to join the Practitioners Forum session later this week, where my colleague Chris Wood will be talking more about these tools and how they are easily applicable to your own circuits of interest. In addition to making these tools more accessible, we're also working very hard to improve the speed of our systems with fast parametric updates. Large error mitigation experiments can involve running hundreds of thousands of circuits that are individually parameterized by Rz gate angles.
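The extrapolation step of the zero-noise workflow described above can be sketched in a few lines. A straight-line fit is the simplest choice (in practice, exponential extrapolation is also common), and the noisy "device" here is just a stand-in function:

```python
def zero_noise_extrapolate(measure, noise_factors=(1, 3, 5)):
    """Zero-noise extrapolation: evaluate an expectation value at several
    amplified noise levels, fit a straight line by least squares, and
    return its intercept -- the estimate of the noise-free value."""
    xs = list(noise_factors)
    ys = [measure(c) for c in xs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx          # value at noise factor 0

# Stand-in for a noisy expectation value that decays linearly with the
# noise amplification factor c; the true noise-free value is 1.0.
noisy_expectation = lambda c: 1.0 - 0.07 * c
print(zero_noise_extrapolate(noisy_expectation))
```

The amplification factors (1, 3, 5) mirror the common practice of stretching the noise by odd multiples; the noise characterization and amplification steps that precede this fit are the hard part on real hardware.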
For this, we're transitioning from our current execution model, which involves a recompilation for each individual parameterized circuit, to a significantly more runtime-efficient model in which a single compilation step is sufficient for executing the many thousands of parameterized circuits that are critical for these workloads. So speed is a very important aspect of the improvements we're making, but so is quality. In this context, you've already heard about the improvements in our new Heron architecture, with up to a 3x improvement in median two-qubit error rates over the Eagle device that was used in the utility experiment. I now want to give you a small taste of the power of exponentials and a sense of what this enables in our latest set of utility experiments. With these improvements, we've been able to address simple extensions of the Trotterized Ising circuits considered in the utility work that lead to significantly faster entanglement growth, even with the same number of two-qubit entangling gates. This can be done simply by introducing single-qubit X gates between each of the ZZ entangling layers in the utility circuits to break commutativity. Furthermore, as a test of device performance, we rely on Loschmidt echoes, where we run the circuit of interest followed by its inverse and ask the question: does each qubit recover its original magnetization, the original spin? For these non-commuting circuits with faster entanglement growth, going up to circuit depths of 30 on 111 qubits, we see that on Heron we're able to produce experimental signal at challenging parameter angles where the corresponding signal vanishes on Eagle. With this signal, we're then able to error-mitigate and recover what the expectation value would have been in the absence of noise.
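The Loschmidt echo idea above can be illustrated with an idealized, noiseless toy: a single qubit and 2x2 rotation matrices standing in for real device layers. Running the circuit followed by its inverse returns the qubit to its initial state, so the Z magnetization comes back to 1; on hardware, how much of that echo signal survives is a measure of the device's noise.

```python
import cmath

def rx(theta):
    """2x2 unitary of a single-qubit X rotation by angle theta."""
    c, s = cmath.cos(theta / 2), -1j * cmath.sin(theta / 2)
    return [[c, s], [s, c]]

def dagger(M):
    """Conjugate transpose: the inverse of a unitary layer."""
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def z_expectation(v):
    return abs(v[0]) ** 2 - abs(v[1]) ** 2

layers = [rx(0.7), rx(1.3), rx(2.1)]    # stand-in for the circuit of interest
state = [1.0 + 0j, 0j]                  # qubit starts in |0>, <Z> = 1

for U in layers:                        # run the circuit ...
    state = apply(U, state)
forward = z_expectation(state)          # magnetization is scrambled here

for U in reversed(layers):              # ... then its inverse
    state = apply(dagger(U), state)
echo = z_expectation(state)             # magnetization fully recovered
print(forward, echo)
```

In the noiseless toy the echo is exactly 1; the experiments described above ask how close a Heron or Eagle device can get to that ideal at challenging parameter angles.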
So, the bottom line is that we have this amazing new computational tool, and what I find truly remarkable is that in just five months following the publication of the utility paper, we've made this tool even more powerful. We've reduced our two-qubit error rates; that takes us further along our roadmap to 5,000 gates and beyond. We've increased speed; that enables us to run more circuits per unit time. We've improved overall device quality; that lets us address even more challenging circuits. And perhaps most importantly, we've started to make these advances more accessible to you, our users. These advances have placed us firmly in this new era of utility. Now let's work together and find the problems and circuits that will get us from utility to advantage. Thank you.