Welcome everyone. Our objective has always been to empower our clients and users to help solve the world's biggest problems using quantum computing. To do this, we need to run lots and lots of large quantum circuits on high-quality quantum hardware. That means we need to increase the performance of the processor, develop a better understanding of how to deal with errors, and simplify how a quantum computer is programmed. Plus, we're going to need error mitigation, circuit knitting, error correction, and much more built right into a software stack tightly coupled to our hardware. Our development roadmap is the path we plan to take to get to this objective, and we are excited to continue to share its evolution with you. We last shared our development roadmap back in November 2021 at the Quantum Summit. Since then, we've made new discoveries, seen more breakthroughs, gained more insight, and created more innovations than ever before. Every one of these has the effect of making the road ahead clearer, and in some cases, they offer a leap ahead. Today, I have Oliver, Blake, and Katie here, and we're incredibly excited to share a few updates with you. But first, I want to share some context. Quantum computing will never replace classical computing. The power is not held in quantum alone, but in the combination of quantum and classical. These combinations will change what it means to compute. In February 2021, we promised we would demonstrate the Qiskit runtime, show a 100x speedup in performance, and break the 100 qubit barrier with our 127 qubit Eagle processor. And in November, we did just that. For 2022, we have four major objectives. One, we're going to bring dynamic circuits to the stack. Two, we're going to take the next step in scale and introduce our 433 qubit Osprey processor by the end of the year. Performance is more than just scale. It also includes quality and speed, so it's not enough just to talk about one of these things.
We need improvements in all of them. So with that, we're also saying three, we'll demonstrate a Quantum Volume of 1,024, and four, we'll increase our speed from 1,400 to 10,000 CLOPS, all by the end of the year. We're incredibly excited about all these challenges ahead, and we'll talk more about them later. But that's not what we're here to talk about today. We're here to talk about something even more exciting. And with that, I'll hand it over to Oliver. Thanks, Jay. As we said back at the Quantum Summit, everything we do on the development roadmap drives one of three things: scale, quality, or speed. Today, I want to talk about scale and quality. To continue this theme of four, we have four developments to share with you. The first one is right here: introducing the Heron processor. The Heron pushes quality to the next level. We did this with completely redesigned gates and new tunable couplers that allow fast gates while simultaneously limiting crosstalk. Going forward, we see this architecture as a replacement for the fixed-coupler devices and the basis for all of the devices I'm announcing today. We're ramping up to be able to control multiple Herons with the same control hardware, enabling quantum computing with classical communication. These classically parallelized systems begin to redefine how we measure scale. So far, measuring scale has been easy: scale was how many qubits we put in one of our quantum processors. Now, for the first time, we'll be talking about the size of a quantum computer beyond simply the number of qubits we can jam onto a single chip. So to have this discussion, we're going to have to adjust our roadmap and make some more room for the extra hardware innovations we're going to talk about. We're developing a device that allows quantum gates between quantum processors, in other words, a chip-to-chip coupler. This coupler will allow us to run quantum gates between multiple chips in the same cryostat.
The connections will be dense enough and fast enough to continue to employ our heavy-hex lattice across multiple chips. From the user's viewpoint, the gate speeds and qualities will be close enough to those we have on-chip for our compiler to handle the differences. It will be just like using a single large processor. The technologies that we've developed for chips with large numbers of qubits will still be critical. The larger the component chips are, the simpler and more reliable our systems will be. So when will this be ready? Well, we're aiming for 2024 in our roadmap to demonstrate a 408 qubit device consisting of multiple chips joined by this modular coupler. We're calling this processor Crossbill. So 2024 is going to be a big year, not only because of this, but because of the next thing I'm going to talk about. Given that we want these chip-to-chip connections to have nearly the same performance as our on-chip connections, we expect them to be physically short, so the chip-to-chip distances are not much longer than the distance between qubits on a chip. This means our qubits will remain very dense inside of our cryostat, leaving us cramped for space for all of the other wiring and hardware required by the qubits. To get around this, we have our third development: a long-range coupler for connecting qubit chips through a cryogenic cable around a meter long. This will be long enough to escape this qubit-density problem, for both classical I/O and the cooling capacity of our cryostats. We expect this long-range coupler to be much slower and lower fidelity than our on-chip gates because it involves a physical cable. These long-range coupler connections will be much less dense; each chip will only have a few connections to other chips. Because of this, our users' programs will have to be aware of these long-range couplers to take advantage of them well. This will allow us to explore interesting topologies of quantum systems.
So I said 2024 was going to be a big year. We're aiming to demonstrate these long-range couplers in the roadmap as a 1,386 qubit device in 2024. We're calling this processor Flamingo. But that's not all. We're planning to bring all of these technologies together in a single system, and that brings us to our fourth development. This system will use the short-range chip-to-chip couplers with modular classical I/O and then long-range couplers. And we'll make a 4,158 qubit system. Yeah, we like round numbers. We're calling this system Kookaburra, and we're planning to do this by the end of 2025. But the reason I'm excited about this system isn't that exact number of qubits. I'd be perfectly happy if it was 4,157, in fact. It's because, with the combination of these technologies, multiple processors, chip-to-chip couplers, and long-range couplers, we will have the basic tools to scale our computers to wherever this roadmap takes us. There will still be a lot of work to do to simplify programming and to get these processors to run quantum circuits. And with that, I'm going to hand it over to Blake. Thanks, Oliver. Oliver just took us through the hardware roadmap. How does this connect with the layers above? We recognize we don't have just one kind of quantum developer. We have an ecosystem operating at three levels: the kernel developer, the algorithm developer, and the model developer. Each developer creates work that helps feed the layers above. The kernel developer focuses on making quantum circuits run better and faster on real hardware. The algorithm developer uses these circuits, together with classical computing, to build quantum applications. The model developer uses these applications to find useful solutions to complex problems in their specific domain. Let's start with what we are building for kernel developers. Well, our objectives for this year don't change. We're still on track to deliver dynamic circuits. In fact, we will start enabling dynamic circuits on exploratory systems by May.
Dynamic circuits couple real-time classical computation with quantum operations. They allow for feedback to change or steer the course of future operations. This is very different from the world of static circuits, where all decisions must be frozen in place before circuit execution begins. Dynamic circuits are very powerful and fundamental to useful quantum computing. This is because they extend what the hardware can do: by reducing circuit depth, by allowing for alternative models when building algorithms, and by enabling the fundamental operations at the heart of quantum error correction. Dynamic circuits are a sophisticated capability that requires advances on several fronts of our technology stack. They require new control hardware that can move data with low latency between different components while maintaining tight synchronization, and our next-generation control platform enables this at scales covering all the processor generations Oliver talked about. Dynamic circuits require a language for describing them, and we have worked with the broader quantum community to develop the OpenQASM 3 circuit description language for this purpose. Dynamic circuits require a new compilation stack to convert OpenQASM 3 programs into executable form. Consequently, we're developing an OpenQASM 3 native compiler. This is an exciting domain for research and development, and we're looking forward to converting this activity from a purely theoretical endeavor into an empirical one this year. As powerful as these circuits are, we need more than a circuit execution capability to handle real workloads. Rather, we need systems that address the needs of quantum and classical computations working interactively. In 2021, we demonstrated the Qiskit runtime with a containerized execution environment co-located with our quantum hardware systems. With this, we showed a 120x speedup on an example research-grade workload, but we see this as just the start.
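As a toy illustration of the measure-and-feedback loop described above (a minimal pure-Python sketch using a classical-basis qubit model, not any real quantum SDK or IBM's stack), active qubit reset measures a qubit and conditionally applies an X gate based on the real-time outcome:

```python
import random

def measure(p1, rng):
    """Projective measurement of a qubit that reads 1 with probability p1."""
    return 1 if rng.random() < p1 else 0

def dynamic_reset(p1, rng):
    """Dynamic-circuit style active reset: measure, then feed the
    outcome back in real time to decide whether to apply an X gate."""
    outcome = measure(p1, rng)
    state = outcome          # measurement collapses the model state
    if outcome == 1:         # classical feedback on the live outcome
        state ^= 1           # an X gate flips a classical-basis state
    return state

rng = random.Random(7)
# Whatever the initial excitation probability, feedback leaves |0>.
assert all(dynamic_reset(p, rng) == 0 for p in (0.0, 0.3, 1.0))
```

A static circuit cannot express this: without mid-circuit measurement and feedback, the reset would have to be unconditional, costing extra depth or extra qubits.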
We can now simplify use of quantum systems and deliver enhanced performance by introducing new computational primitives powered by the Qiskit runtime. For the many new users who will come to quantum computing over the next few years, these primitives will be the bedrock of their programming experience. The unique power of quantum computers is their ability to generate non-classical probability distributions at their outputs. Consequently, much of quantum algorithm development is related to sampling from, or estimating properties of, these distributions. The Qiskit runtime primitives are a collection of core functions for working easily and efficiently with these distributions. The first two primitives we are making available are the sampler and the estimator. The sampler collects samples from a quantum circuit to reconstruct a partial probability distribution of the output. This is useful for search applications such as Grover's algorithm. The estimator allows users to efficiently calculate expectation values of operators. These operators can be used in a variety of applications. For instance, they can represent the electronic structure of a molecule, the magnetization of a spin material, or the kernel of a machine learning problem, and much more. By elevating these primitives to the core interface of our quantum systems, we are paving the way for easier algorithm and application development. But we won't stop there. We'll use the foundation provided by these primitives to enhance the performance of quantum systems. The first way we will do that will be by enabling threaded runtimes. Oliver showed you our plans to introduce classically parallelized quantum computing with the Heron processor in 2023. In the same year, we'll update our Qiskit runtime primitives to be able to execute on multiple hardware systems, including automatically distributing work that is trivially parallelizable.
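The division of labor between the two primitives can be sketched in plain Python over a made-up set of shots (illustrative only; the real primitives run circuits on quantum hardware and return these objects for you):

```python
from collections import Counter

def sampler(shots):
    """Sampler-style primitive: turn raw bitstring shots into an
    estimated probability distribution over outcomes."""
    total = len(shots)
    return {bits: n / total for bits, n in Counter(shots).items()}

def estimator(dist, observable):
    """Estimator-style primitive: expectation value of a diagonal
    observable (e.g. a Pauli-Z string) under a sampled distribution."""
    return sum(p * observable(bits) for bits, p in dist.items())

# Hypothetical shots from a 2-qubit circuit (values are illustrative).
shots = ["00"] * 45 + ["11"] * 45 + ["01"] * 5 + ["10"] * 5
dist = sampler(shots)

# <Z0 Z1> is +1 when the bits agree and -1 when they differ.
def zz(bits):
    return 1 if bits.count("1") % 2 == 0 else -1

expval = estimator(dist, zz)   # ~0.8 for these shots
```

The point of elevating these to the core interface is that an algorithm developer asks for a distribution or an expectation value, not for raw device access.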
Next, we will enhance primitive performance at the service level with low-level compilation and post-processing methods. For instance, we will introduce error suppression tools such as dynamical decoupling to reduce crosstalk and extend coherence in circuit execution. We will add error mitigation techniques to deal with imperfections in measurements and gates. As an example, probabilistic error cancellation yields a provably accurate quasi-probability distribution at the cost of exponential runtime. Despite that cost, with sufficiently low noise, this can still beat classical methods. Of course, we are still actively pursuing quantum error correction, which will eventually appear at this layer too. These advanced primitives will allow us to deliver Qiskit runtime services as an API used by algorithm developers, who can then focus on using quantum circuits and classical routines to build quantum workflows. In these workflows, an algorithm developer may need to break a problem apart into a series of smaller quantum and classical programs, and rely upon an orchestration layer to stitch the data streams together into the overall workflow. We call the infrastructure and tooling that enables this way of working quantum serverless. The serverless concept is a powerful paradigm for enabling flexible quantum-classical resource combinations without requiring developers to also be infrastructure experts. For example, a developer may want to combine a GPU cluster with a quantum system. We want to enable flexible infrastructure configurations with simple-to-use code concepts, so that a small change in a source file is sufficient to access a variety of compute resources. We demonstrated a proof of concept of quantum serverless at our Quantum Summit in 2021, but in 2023, we will integrate this concept into our core software stack and show how it can enable new functionality such as circuit knitting.
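One of the simplest mitigation ideas mentioned above, correcting imperfect measurements, fits in a few lines: calibrate the probabilities of misreading each basis state, build the resulting assignment matrix, and apply its inverse to the measured distribution. This is a hand-rolled single-qubit sketch with made-up error rates, not the production implementation:

```python
def mitigate_readout(p_meas, eps01, eps10):
    """Undo single-qubit readout error by inverting the assignment matrix.

    eps01: probability of reading 1 when the qubit was 0
    eps10: probability of reading 0 when the qubit was 1
    p_meas: (p0, p1) as observed on the device
    """
    # Assignment matrix A maps true probs to measured probs: p_meas = A @ p_true
    a, b = 1.0 - eps01, eps10
    c, d = eps01, 1.0 - eps10
    det = a * d - b * c
    m0, m1 = p_meas
    # Explicit 2x2 inverse applied to the measured vector
    return ((d * m0 - b * m1) / det, (a * m1 - c * m0) / det)

# Hypothetical noise: 2% of 0s read as 1, 5% of 1s read as 0; true state 70/30.
meas = (0.98 * 0.7 + 0.05 * 0.3, 0.02 * 0.7 + 0.95 * 0.3)
recovered = mitigate_readout(meas, eps01=0.02, eps10=0.05)  # ~(0.7, 0.3)
```

Probabilistic error cancellation generalizes this inversion idea from measurements to gates, which is where the quasi-probabilities and the exponential sampling cost come from.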
Let me hand things over to Katie to tell you more about how we expect algorithm developers to use this functionality. For quantum computing to enable more and more users to tackle harder and harder problems, we need to build higher-level tools. Optimizing between classical and quantum resources is going to be key to quantum advantage. We need tools where we can develop better error mitigation and eventually error correction procedures that we will then upload into an estimator or sampler primitive runtime program. Qiskit runtime is perfect for this, but we also need tools that can simplify the classical and quantum integration, and these tools will be provided through quantum serverless. This year, we demonstrated a method called entanglement forging for trading off classical and quantum resources to solve larger problems. Entanglement forging uses a well-known linear algebra technique to decompose the quantum state into smaller quantum circuits. By truncating the number of subproblems, we can double the size of the system we can address with a fixed number of qubits. Related techniques like circuit cutting and embedding break down large circuits in different ways, and we call this collection of methods circuit knitting. All of them use a model where a larger problem is broken down into smaller pieces that run on a quantum computer and are then knit back together with a classical computer. The combined computation provides a solution to the larger problem. For the algorithm developer to use these methods, we're going to need a circuit knitting toolbox, and we need quantum serverless to provide the infrastructure for distributing resources between quantum and classical. These circuit knitting ideas require running lots of circuits, using classical computers to modify the circuits, and then running lots more. So it would be great to have multiple quantum devices running circuits in sync to speed up the solution.
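In the simplest possible case, a cut that leaves the two halves unentangled, the knitting step is just a product of subcircuit expectation values. The sketch below (pure Python, with analytic single-qubit expectation values standing in for hardware runs) shows only that degenerate case; real circuit knitting handles entangling cuts with quasi-probability decompositions and many more subcircuit runs:

```python
from math import cos, pi

def subcircuit_expval(theta):
    """<Z> for one qubit prepared as Ry(theta)|0>, standing in for the
    result of running one half of a cut circuit on a device."""
    return cos(theta)   # cos(theta/2)^2 - sin(theta/2)^2

def knit(theta_a, theta_b):
    """Classical recombination: for an unentangled cut, the joint
    observable factorizes as <Z (x) Z> = <Z>_A * <Z>_B."""
    return subcircuit_expval(theta_a) * subcircuit_expval(theta_b)

# Each half could run on a different device, or in parallel on one.
assert abs(knit(0.0, 0.0) - 1.0) < 1e-12   # |00>: <ZZ> = +1
assert abs(knit(pi, 0.0) + 1.0) < 1e-9     # |10>: <ZZ> = -1
```

The two halves need only half the qubits of the original problem, which is the trade the text describes: more circuit executions and classical post-processing in exchange for addressing a larger system.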
This is why we need the threaded runtimes Blake mentioned and the multi-chip systems Oliver mentioned. Furthermore, if we can use the measurement outcomes from one device to influence the future operations on others, we can potentially make quantum advantage happen even earlier. To do this, we need real-time classical communication between these independent processors. As we consider problems that we expect quantum computers will allow us to solve, there are certain types of circuits we will use for many classes of problems, like circuits for sampling. In simulations of quantum systems, we will be interested in studying dynamics, and we will need circuits for time evolution. As we build out our tools under quantum serverless, we will incorporate these optimized circuits into libraries. Finally, on top of the rest of our stack are application services. At this layer, we're building the software for model developers who use runtime and serverless to address specific use cases. This will enable software applications for these domain experts to bring quantum algorithms and data together. Starting next year, we will begin to define these services with our first test cases. As these prototypes mature, we will continue to develop our quantum application services. Here, we'll work with our partners, who will help us accelerate our path to software applications. Our first step will be to integrate machine learning and kernel algorithms into model developer applications. Back to you, Jay. Thanks, Katie. So our 2022 development roadmap has everything we already promised, but now with new breakthrough technologies and milestones ahead. In 2023, we're demonstrating quantum computing with classical communication and introducing our Heron processor and threaded runtimes. This is in addition to the 1,121 qubit Condor processor we've already talked about.
The following year, in 2024, we're demonstrating chip-to-chip couplers and long-range couplers with our Crossbill processor and our Flamingo processor. And in 2025, we're combining these technologies in the Kookaburra processor and, more importantly, we're breaking bottlenecks in scale for our future systems. On top of this, classical and quantum computing are coming together. We will extend quantum circuits with real-time classical computing. We call this dynamic circuits. We will mitigate and eventually correct errors in quantum circuits with near-time classical computing. We call this the Qiskit runtime. We will use elastic classical computing to knit quantum programs together to solve important problems. We call this quantum serverless. For many years, CPU-centric supercomputers were the transactional workhorse of the world, and IBM has a proud history of developing these. In the last few years, we've seen the emergence of what we're calling AI-centric supercomputers, where CPUs and GPUs coexist for AI-heavy workloads. Now, IBM is ushering in the age of quantum-centric supercomputers, where quantum resources, QPUs, CPUs, and GPUs are woven together into a compute fabric. With this compute fabric, we will build the essential technology of the 21st century. We've got a lot of science to do, so we're signing off for now, and we'll see you soon.