Back in February, we shared our development roadmap and a detailed plan for frictionless cloud quantum computing using 1,000 qubits by the end of 2023. The good news is, we're still on course to deliver this. As promised, in May we launched Qiskit Runtime, and as we predicted, we demonstrated a performance increase of 100 times. Then, as we didn't predict, we actually saw a performance increase of 120 times. So we're really happy about this, but what makes us even happier is that we think this marks a turning point in the development of quantum computing and brings us one step closer to quantum advantage.

You see, Qiskit Runtime was our first step in solving a new set of really tough challenges related to quantum computing performance, and that's what I want to talk about today. As we bring quantum computing out of the lab and turn it into a real business, we can't just look at the number of qubits we have, or the quality of those qubits. We have to look at the useful work those qubits can do. This is what we define as performance.

Performance is measured by three key metrics, and we must keep improving all of them all the time. First, we have scale. Increasing the number of qubits in our systems is critical, as it determines the size of the problems we can compute. Scale is directly tied to technology development, especially the hard tech, which is why we must continue investing in hardware technologies to ensure they advance each year. Second is quality. This is a measure of how well our technology implements quantum circuits, capturing effects like material loss and other imperfections, as well as control and readout errors. We currently measure this using quantum volume, a benchmark we introduced to the industry in 2017 that has since been widely adopted. Finally, we have speed. This is a measure of how fast our systems can solve a problem. We need to be able to solve useful problems in a reasonable time, or we do not have a business.
We measure this by QPU speed, and we'll talk more about it later.

Let's first talk about scale. Scaling quantum processors requires advances in quantum hardware technology. In 2019, we released our 27-qubit Falcon processor, where we made a critical advance in yield through the underlying qubit lattice arrangement and a precise junction post-fabrication process. In 2020, we created Hummingbird at 65 qubits, while demonstrating efficient and scalable multiplexed readout for a reduction of output components in the system. This year, we're moving on to launching IBM Eagle, a 127-qubit processor. To get there, we are leveraging IBM's hard technology, something deeply rooted in our history. The packaging we use for Eagle is adapted from our CMOS technology, and on this foundation we've made significant advances in packaging techniques for superconducting qubits, including 3D integration and multi-level wiring for higher-density signal delivery. We next plan to scale to 433 qubits with our IBM Osprey processor in 2022. For this, we are inventing the next generation of scalable I/O that can deliver signals from room temperature down to cryogenic temperatures. As I said before, the challenge is not just scaling, but developing key technologies that make scaling easier. This requires a significant investment in hard technology, an area where IBM has notable strengths.

Now let's talk about quality. Back in 2018, we committed to the goal of doubling our quantum volume every year. We did this partly as a comparison to the rate of progress of classical computers, famously captured by Moore's Law. In fact, in the last year we made two jumps, from 32 to 64 and then to 128. Using our exploratory systems, we're advancing the frontier of quantum computing technology every day.
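The doubling commitment is easy to check as simple arithmetic. A minimal sketch (the function below is illustrative, not an IBM benchmark tool):

```python
def projected_quantum_volume(start_qv: int, years_elapsed: int) -> int:
    """Quantum volume under the 2018 commitment: double once per year."""
    return start_qv * 2 ** years_elapsed

# One year at the committed pace takes 32 to 64; the talk notes two jumps
# in a single year, reaching 128 and landing ahead of the curve.
print(projected_quantum_volume(32, 1))  # on-pace value after one year
```

This exponential trajectory is the point of the Moore's Law comparison: constant-factor yearly improvement compounds.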
With our exploratory devices, we're driving critical improvements at the physical level: better coherence, higher gate fidelity, and reduced crosstalk. And it's not all physical. We're also working at the compiler level to squeeze more efficiency out of our circuits, so we are literally ahead of our own curve on quality.

Now let's talk about speed in detail. As quantum computing evolves, we care more and more about the useful work our systems can do in a reasonable time. We define speed as the number of primitive circuits that can be processed in a second, similar to FLOPS in classical computing, the number of floating-point operations per second. Improving QPU speed is key to practical quantum computing. Like classical programming, quantum programming requires running many circuits, and a reasonable QPU speed allows a user to incorporate quantum computing as part of their workflows. QPU speed is determined by how fast you can repeatedly execute each circuit run.

You can increase QPU speed in a number of ways. Here are some examples. First, your gates and readout must be fast so circuits can execute quickly. This is driven by the choice of physical architecture: superconducting qubits naturally offer fast gate operations and readout because we can engineer strong couplings between qubits and the readout resonators. Second, you need advanced control electronics that enable qubit reset for reuse. We implemented this reset capability by developing IBM control electronics using our FPGAs. Finally, we drive yet more QPU speed by reducing latencies and adding performance across the software stack: improved code generation, pulse orchestration, instrument loading, and more efficient interactions with the cloud through the Qiskit Runtime API. We are working on further improvements by making OpenQASM 3 end-to-end, including with it a new compilation process, improving how we use Qiskit Runtime, and speeding up the Qiskit SDK.
We have demonstrated an initial proof of concept, and we're going to show great results in the next few months. Our strategy is to improve every available driver of QPU speed. By increasing gate speed, improving readout, using reset from the advanced control electronics, and implementing QASM 3 software and Qiskit Runtime, we were able to demonstrate this year a 120-times improvement in the execution time of a chemistry simulation. Back in 2017, our work made the cover of Nature, showing how we could simulate lithium hydride molecules on quantum devices. Computing the binding curve with error mitigation requires running 4.8 billion quantum circuits. Now, with Qiskit Runtime and all the other improvements to QPU speed we've just discussed, this chemistry calculation takes us 7 hours. As you can see, if we had a slower architecture that measured circuit repetition time in milliseconds rather than microseconds, the difference would be compounded: at 5 milliseconds per circuit, the runtime becomes 290 days, making quantum computing impractical. So for all these reasons, we believe higher QPU speeds will make quantum computing more practical and consumable, especially in the current era of noisy quantum hardware, which requires running and averaging many iterations of circuits.

Let's look closer at our physical architecture choices. Here we compare different systems and why we chose superconducting qubits. Physical architectures such as ion traps have a high quantum volume, in some cases higher than superconducting qubits. However, across all the other metrics, such as scale and QPU speed, they are much lower. In fact, the physical attributes of these devices make them run around 1,000 times slower, meaning a program that takes a few days on our system could take years to run on theirs. Superconducting qubits show a very balanced set of attributes that we see as optimal for building a quantum computing system.
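A back-of-the-envelope check of those numbers, assuming a flat per-circuit repetition time. The microsecond-scale value below is inferred from the quoted 7-hour total, not stated in the talk, and the millisecond case lands near, but not exactly at, the quoted 290 days:

```python
n_circuits = 4.8e9  # circuits for the error-mitigated LiH binding curve

# ~5.25 µs per circuit reproduces the quoted 7-hour total:
fast_hours = n_circuits * 5.25e-6 / 3600

# At 5 ms per circuit, the same workload stretches to roughly 280 days,
# broadly consistent with the ~290-day figure quoted in the talk:
slow_days = n_circuits * 5e-3 / 86400

print(round(fast_hours), "hours vs", round(slow_days), "days")
```

The roughly thousand-fold gap between the two repetition times is exactly the hours-versus-months difference the talk describes.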
One last thing: we don't see performance as just something we fix with improvements to hardware. We see quantum resources evolving in a similar way to GPUs: they work best woven into a compute fabric of classical resources. Like the GPU, we believe the future is a programming environment that is both classical and quantum. This simple animation shows how the Qiskit Runtime architecture can be consumed by the user via the cloud.

So to summarize: for adoption to accelerate, we need to also focus on the useful work quantum computers can do. We define this as performance, a combination of scale, quality, and speed. To be successful, we need to constantly improve all three. Back to our roadmap. As of now, in 2021, we're right on course. In our drive for increased performance, we're now also focused on quantum processing speed. We took our first steps with Qiskit Runtime, and we're confident we'll see even greater increases in the near future. The next step is our Eagle 127-qubit processor. More on this soon. Thank you.