All right, everybody's still awake? Okay, good afternoon, everyone, and welcome to the final session of our 2023 IBM Quantum Summit, where we will tell you about how we're building towards quantum-centric supercomputing. By now, you've all zeroed in on the theme and word of the day, right, on firmly and loudly entering the era of quantum utility. Here we'd like to take you through our industry-defining roadmap and show you how we plan to extend this era of utility and continue to scale into the future.

Now, Jay introduced our updated roadmap, looking forward 10 years to 2033, way back in the keynote at the start of the day. There's a lot to go into here: a lot of birds, and a split between innovation milestones in both hardware and software that feed into our development roadmap. But before I get into more of the details of the new, let's focus on where we've gotten to in the roadmap today. Look, I'll come right out and say it: our roadmap is our mantra. It drives us internally, and I hope you all see that it builds the confidence that year after year, we'll hit our targets.

Zooming into this a bit with our processor innovations: the first phase of our roadmap critically focused on scale. Year after year, we worked on new technologies that push the capabilities of scale, ultimately culminating with Condor this year, breaking the 1,000-qubit barrier. It was just two years ago that we broke the 100-qubit barrier with Eagle, and now I'm excited to say that, yes, we've gone beyond 1,000 qubits.

What innovations went into Condor? It might be better to ask what innovations didn't. Across the board, from the chip to the packaging, to the wiring, the system, and the cryogenics, we pushed the limits of scale and yield to get to 1,121 superconducting qubits. This meant layout enhancements for optimal qubit placement and routing, paired with predictive simulations at the scale of 1,000 qubits. Anyone who's actually worked in the field will know that modeling these superconducting devices is challenging, let alone getting it right at this scale of 1,000 qubits. Just as an example, we were able to make accurate predictions of the device parameters that really matter, such as the frequencies of the readout resonators as well as coupling strengths.

Condor also incorporates an evolution in our advanced CMOS-based packaging techniques. When we first introduced multi-level wiring with through-substrate vias to break 100 qubits with Eagle, we said then that it would be straightforward to keep applying IBM's decades of semiconductor know-how and evolve quickly. And we've done exactly that, now going to five levels of superconducting metal layers. With Condor, despite the over 1,000 qubits, we've managed to achieve a 2.5x increase in qubit count with just a 70% increase in chip size over Osprey. We've also pushed on our laminated board technology to provide the wiring for all 1,121 qubits. Osprey was already quite large last year, but Condor is absolutely massive, and here it is. It's about the size of that take-home Quantum Decade book that you've been seeing. And yes, it is that large, right? And we cool that down inside of a standard dilution refrigerator; that whole thing fits. But if anything, the size tells us something: the reason that board is so large is actually the connectors.
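To put those packaging numbers in perspective, here is a quick back-of-the-envelope sketch; only the qubit counts and the 70% chip-size figure come from the talk, and the density calculation is simple arithmetic on top of them:

```python
# Rough areal-density comparison of Condor vs. Osprey, using only the
# figures quoted in the talk (2.5x the qubits on a chip ~70% larger).
osprey_qubits = 433
condor_qubits = 1121
chip_area_ratio = 1.70  # Condor chip size relative to Osprey

qubit_ratio = condor_qubits / osprey_qubits   # ~2.59x the qubits
density_gain = qubit_ratio / chip_area_ratio  # ~1.52x more qubits per area

print(f"Qubit count ratio:  {qubit_ratio:.2f}x")
print(f"Areal density gain: {density_gain:.2f}x")
```

In other words, roughly half of Condor's scaling came from a denser layout rather than simply a bigger chip.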
And if anything, Condor helps motivate us to keep scaling down these high-density microwave connectors as we progress further in our roadmap over time. Now, last year we also debuted high-density flex wiring with Osprey, and I'm glad to say we've continued to evolve that with Condor. Even at 1,121 qubits, we're able to fully load everything into a standard commercial dilution refrigerator and cool it down. Years ago, this would have been unthinkable. We have over one mile of signal trace. You heard that from Rajiv earlier today: one mile, with over 1,700 signals connected. But more importantly, through the use of these flex cables, we're able to use only 200 or so discrete components, where just a few years ago it would have been 1,800 parts. Our engineers are very thankful not to have to wire up 1,800 individual pieces.

And yes, Condor works. We can yield 1,121 qubits on a chip, cool a whole system loaded to the brim, and the coherences are on par with our Osprey performance. And so that's it: we have achieved 1,000 qubits. Now, we aren't taking it the whole way and turning this into a product. But the innovations here have taught us how to push scale and yield to the limit, and now we know where things can break and what still needs to improve. And although Condor has truly been a feat, we're just that much more excited about its sister processor of the year, Heron. You heard about it in the keynote from Jay and Rajiv, but let me invite Kristi Teiberg to tell you more about Heron and what it means for the rest of our roadmap.

Thank you, Jerry. Now I'd like to take you through the next steps in the innovation roadmap, starting with Heron. The Heron system enables us to achieve even further improvements in the quality of large, 100-plus qubit systems, while also serving as a platform to drive scale through modularity. This year, we have brought up our first-generation Heron system, containing 133 qubits and incorporating a new tunable coupler architecture. Heron builds upon technology developed for Eagle, Osprey, and Condor, including superconducting bump bonds, through-substrate vias, and multiple levels of wiring. It has a similar I/O count compared to the 433-qubit Osprey chip and leverages the flex wiring technology that Jerry just highlighted for Condor.

We've demonstrated continued improvement in quality over the past six years, with larger and more complex systems. Here we can see all of the systems that we've deployed over the last six years and the median error rate for each system, where a lower error rate corresponds to increased quality. We are now demonstrating median error rates in the range of 3×10⁻³ for the Egret system Prague and the Heron system Monte Carlo. Both Egret and Heron incorporate tunable couplers, enabling reduced error rates. Taking a closer look at the error rate, we compare the average gate error of the best Eagle system to the first Heron system, and we see a few important differences. First, the median gate error of Heron shows close to a 3x improvement compared to Eagle, a system already demonstrating utility. Second, the delta between the isolated and the layered gate error is significantly reduced in Heron, as you can see by the decreased separation in the curves; this represents an improvement in crosstalk. Finally, there is a tail of higher-error gates. This is an important area of focus for further improvement.
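To see why that roughly 3x reduction in median gate error matters at utility scale, here is a minimal sketch, assuming the standard first-order model where circuit fidelity decays as (1 − p)^G for G two-qubit gates of average error p; the error values below are representative of the curves discussed above, not exact device numbers:

```python
# First-order fidelity model: F ≈ (1 - p) ** G for G two-qubit gates of
# average error p. Real devices also suffer readout error, idling
# decoherence, and crosstalk, so this is an optimistic sketch.
def circuit_fidelity(gate_error: float, num_gates: int) -> float:
    return (1.0 - gate_error) ** num_gates

for gates in (500, 2500, 5000):
    f_eagle = circuit_fidelity(9e-3, gates)   # Eagle-like median error
    f_heron = circuit_fidelity(3e-3, gates)   # Heron-like median error
    print(f"{gates:>5} gates: Eagle-like F ≈ {f_eagle:.1e}, "
          f"Heron-like F ≈ {f_heron:.1e}")
```

Even at Heron-like error rates, the raw fidelity of a 5,000-gate circuit is tiny, which is exactly why the error mitigation techniques in the runtime, discussed below, carry the other half of the load.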
Through research test devices, we have demonstrated new techniques to improve the coherence of the qubits, shifting the distribution to longer coherence times and eliminating the low-coherence qubits that lead to higher gate error. We're incorporating these changes into future revisions of Heron to reduce error and improve the quality of the system.

Another way to look at the improvements in Heron is to use the new benchmark, error per layered gate. This shows the average error of circuits run with structured layers, where a structured layer is a layer of two-qubit gates covering all the qubits in a chain of a specific length. At shorter chain lengths, the highest-quality, lowest-error qubits are selected for the circuit, resulting in a relatively low error per layered gate. With longer chains, more qubits are used, causing higher-error gates to be incorporated. As I mentioned previously, we are working on new techniques to improve the quality of the worst qubits in the system to achieve a lower error per layered gate for longer chains. Heron shows a significant improvement in error per layered gate compared to our best Eagle processor at all chain lengths tested. This highlights our continued progress as we transition to Heron, building upon our learning from Eagle. We're really excited about these advancements in quality in Heron and are looking forward to continued improvements towards our 5,000-gate demonstrations next year.

And here is the first-generation Heron chip that you saw this morning in the keynote. Starting today, the first 133-qubit Heron system, IBM Torino, is available for exploratory access. As we move into 2024, multiple Heron systems will be made available as part of the 100-plus qubit utility-scale fleet.

Heron is not only an important advancement in quality; it also serves as a platform to drive scale through modularity. Flamingo and Crossbill both extend the Heron platform through modular coupling. In the Flamingo system, we start from the Heron platform, add a set of L-coupler gates, and connect these gates between different chips using L-coupler cables. These cables can be used to connect chips up to about one meter apart, allowing increased space and flexibility for wiring and other fridge components. Flamingo can be scaled from two to seven chips interconnected with these L-coupler cables. In parallel, we are developing M-coupler technology to create high-fidelity gates between qubit chiplets within a multi-chip module. This includes development of an M-coupler bus, combined with M-coupler packaging, around a Heron platform. This technology can be leveraged to increase the number of interconnected qubits while balancing chip size and yield considerations. We believe both L-couplers and M-couplers will be important building blocks in our future.

This year, we have developed key technologies needed for both Flamingo and Crossbill. Here, I'll focus on one example that will feed into Flamingo. We have developed test structures to measure the quality factor of the one-meter aluminum coaxial cable planned for the L-coupler in Flamingo. The results show that we can achieve an internal quality factor of the cable exceeding the target for 95% state-transfer fidelity between chips. This gives us confidence in the Flamingo design. And now I'll hand it back over to Jerry to take us through some of the next steps beyond Flamingo and Crossbill.

Thank you. That's really exciting about Heron. And thank you, Kristi.
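As a side note on the error-per-layered-gate benchmark Kristi described: here is a minimal sketch of the arithmetic, assuming the definition from IBM's layer-fidelity benchmarking work, where a complete layer over an N-qubit chain contains N − 1 two-qubit gates and EPLG = 1 − LF^(1/(N−1)); the layer-fidelity value below is hypothetical:

```python
# Error per layered gate (EPLG) from a measured layer fidelity (LF),
# assuming EPLG = 1 - LF ** (1 / n_2q), with n_2q = N - 1 two-qubit
# gates for an N-qubit chain (per IBM's layer-fidelity benchmark).
def eplg(layer_fidelity: float, chain_length: int) -> float:
    n_2q = chain_length - 1
    return 1.0 - layer_fidelity ** (1.0 / n_2q)

# Hypothetical: a layer fidelity of 0.70 measured over a 100-qubit chain.
print(f"EPLG_100 ≈ {eplg(0.70, 100):.2e}")  # ~3.6e-3 per layered gate
```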
So you've heard from Kristi now that Heron is joining our fleet of utility-scale processors. But more than that, we're also targeting reaching that 5,000-gate number with it by the end of next year. This is what we previously called the 100-by-100 challenge. You've heard today at length from Sarah, Abhinav, and many of our esteemed external users, who are already working at a few thousand gates. So looking ahead, Heron and Flamingo, and subsequent improvements to quality in terms of more gates, will power our development roadmap for the near future, combined with the mitigation techniques in our runtime, to further fuel utility.

But where do we go from there? How do we extend utility to further scale? The key is error correction. But more importantly, we start a continuous path where everything that our users and our clients are doing with the utility-scale machines today will translate. And it just so happens that in 2029, we jump a ridiculous number of gates, from 15,000 to 100 million gates over 200 qubits, and then subsequently to 1 billion gates over 2,000 qubits in 2033. How are we going to do this? Let me invite Andrew Cross to tell you more.

Thank you, Jerry. So for many years, the surface code has been nearly synonymous with quantum error correction. In one form or another, it's been the protocol to demonstrate, for good reason. Its data and check qubits, pictured here, can be arranged on a square lattice, qubits only need to interact with their neighbors, and it tolerates a relatively high error rate. However, the surface code has a well-known drawback: it suffers from low rate. To correct a sufficiently large number of errors, one logical qubit is ultimately encoded into a very large number of physical qubits. This fact makes resource estimates for a large surface code quantum computer look especially daunting.

For this reason, we turned our attention to a different type of code: a high-rate low-density parity-check code, or LDPC code. And surprisingly, we found examples that perform even better than expected. The example shown here is one that we affectionately call the gross code. You might think that something gross is unappealing, but a gross is an aggregate of 12 dozen things, and this code has a small 12-dozen, or 144-qubit, code block that encodes 12 protected qubits. Remarkably, the error correction protocol can be implemented with six connections per qubit, and those connections can be made on just two planar layers. Compared to a surface code block, this code uses as little as one-fourteenth the number of qubits.

The gross code is more complex than the surface code, but it's not that difficult to visualize the connectivity. First, imagine a square lattice of qubits, like the surface code. Next, add two different kinds of long-range on-chip connections between certain qubits; these long-range connections repeat in a regular way. Now, glue the left and right edges together, and glue the top and bottom edges together. This forms a donut, or torus. And finally, squash this torus flat and fold it in half. The resulting arrangement of qubits is one possible abstract device layout with connections on two wiring layers. LDPC codes such as the gross code have significantly reduced overhead and yet still have error-correcting performance comparable to the surface code. Here you see simulations of the error probability of the gross code side by side with surface code simulations.
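To make the one-fourteenth overhead claim concrete, here is a rough count; this is my own arithmetic, not taken from the preprint, and it assumes the usual rotated-surface-code figure of d² data plus d² − 1 check qubits per logical qubit:

```python
# Physical-qubit overhead: 12 logical qubits in a [[144, 12, 12]] gross
# code block (144 data + 144 check qubits) vs. 12 separate distance-12
# rotated surface code patches (d*d data + d*d - 1 check qubits each).
d = 12
surface_per_logical = d * d + (d * d - 1)   # 287 physical qubits
surface_total = 12 * surface_per_logical    # 3,444 physical qubits

gross_total = 144 + 144                     # 288 physical qubits

print(f"Surface code, 12 logical qubits: {surface_total} physical qubits")
print(f"Gross code block, 12 logical:    {gross_total} physical qubits")
print(f"Savings: ~{surface_total / gross_total:.0f}x at equal distance")
# The talk's one-fourteenth figure compares at matched logical error
# rate, where the surface code needs a somewhat larger distance.
```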
Over the relevant range of physical error probabilities, the gross code and the distance-12 surface code have similar block error probabilities. However, the gross code is just a fraction of the size. For more details, please see our team's recent preprint, which is shown on the right here.

These results have shaped our new innovation roadmap, where we set a goal to develop Kookaburra, a system that implements a memory block using a quantum LDPC code. For Kookaburra, we need to demonstrate two crucial technologies. First, we need a coupler that allows each qubit to interact with six others, as shown on the left. Second, we need an on-chip coupler, called a C-coupler, that connects more distant qubits on a second bus layer. These technologies come together to enable shallow circuits that measure the checks of a high-rate LDPC code. To go further toward a modular, fault-tolerant architecture, we need new innovations, theoretical insights, and hardware developments. With Kookaburra, we're developing the C-coupler technology I already mentioned. Flamingo and Crossbill develop the M- and L-coupler technology we need to connect chips and modules together, as you heard from Kristi. These technologies build toward an error-corrected quantum computer, first with Cockatoo, which demonstrates multiple connected code-block modules, and then with Starling, which incorporates many modules and hundreds of encoded qubits. Now I'll turn it back to Jerry.

Wow, there's a lot of birds, right? Thanks, Andrew. Now, both the era of utility and this next phase will require a very tight interlock between classical computational resources and quantum ones. Just last year, we introduced this concept of quantum-centric supercomputing, and I'd like to invite Antonio Córcoles to tell you more about that vision.

Thank you, Jerry. So we have seen how this era of quantum utility, along with progress in quantum hardware, packaging, and quantum error correction, will start showing the path to scaling quantum. We first introduced quantum-centric supercomputing at last year's IBM Quantum Summit, and this year I would like to revisit the concept with clarity and tell you exactly what we mean when we say quantum-centric supercomputing, and how we can tap its full potential as we progress in our development roadmap. And I would like to do that by busting a few myths.

So, myth number one: the quantum algorithm has to run in a single circuit. Think quantum phase estimation or Shor's algorithm, where you are given a circuit and you run it once, perhaps a few times, and you get your answer. And yes, I acknowledge that the single circuit is universal, but it's also a construct that captures the computability of a problem. In practice, a single quantum circuit is not the most practical way of running quantum algorithms. We have seen a few examples today, in our collaboration with Argonne on the propagation of observables and with BASQ on a multiproduct formula for Hamiltonian simulation. So the future of quantum algorithms will rely on multiple circuits running together. We call this circuit knitting: a set of techniques leveraging multiple circuits and classical post-processing.

And speaking of classical, here is my second myth: quantum computing is an appendix to HPC. Look, quantum is going to redefine HPC. The meaning of HPC is not static: what it means today is not the same as what it meant in the past, nor the same as what it will mean in the future. Let's look at it through the lens of time.
So, we had CPUs. Then we had vector computers in the second half of the 1970s, with the Cray-1 system; that was the beginning of modern supercomputing. Then we had massively parallel computing using distributed memory towards the end of the 1980s. The next revolution was enabled by the cost advantage of standard microprocessors, and by the mid-2000s, the majority of the systems in the TOP500 were Beowulf clusters. Since then, GPUs have emerged as the new powerhouse platform for HPC. And now we have quantum processing units, or QPUs, which are on a clear track to redefine HPC once more into what we call quantum-centric supercomputing: integrated quantum and classical systems working together in parallelized workloads to run computations beyond what was possible before.

So, what kind of problems can quantum-centric supercomputing tackle? Here's my third myth: all quantum algorithms have already been discovered and optimized. In 1994, Peter Shor formulated perhaps the most famous quantum algorithm, an efficient way to find the prime factors of an integer. And then the algorithm stayed dormant for about 30 years. This year, a few months ago, a quick succession of two papers showed how to run Shor's algorithm using multiple circuits and classical post-processing, running a number of instances of those circuits. And this is a wonderful example of how quantum-centric supercomputing will work. We are not building quantum-centric supercomputing to run single circuits.

Let's look at a particular instance of this algorithm. These are the resources that you need to run Shor's algorithm to break RSA-2048 encryption: about 6,000 qubits and 3 billion gates in one single circuit. And this is the new result: four orders of magnitude of improvement in circuit size. In the words of Peter Shor himself, there are still probably lots of other quantum algorithms to be found. But if you focus your quantum algorithm search on single circuits, you end up on a landscape of single circuits. These are some particular problems of interest in the areas of science, industry, and math, and the resources that are needed to run them. We, along with quantum users, application experts, and algorithm researchers, are going to shift this landscape. Thank you.

We are going to start doing that by dispelling a fourth myth: to come up with quantum algorithms, you don't need hardware. Well, maybe. If you are not too worried about the applicability or the practicality of your algorithm, maybe you don't need hardware. But there are many examples of the contrary, even in classical computing. Turbo codes were initially found empirically, and they led to the investigation of many other forms of iterative signal processing. Ken Wilson's numerical renormalization group was later proven to give a polynomial-time solution for the energy of gapped 1D systems. The simplex algorithm for linear programming had a proven exponential run time, and yet people saw that the numerics kept converging fast; it was only later, by introducing the theoretical notion of smoothed complexity, that the exponential run time was shown to be an artifact of problems engineered for particular instances. And there are many more examples. So playing with hardware is not only important, it is critical. So as we progress in our hardware roadmap through the era of utility, it is time for algorithm research to take a step forward.
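As a footnote to the Shor's-algorithm numbers above, here is a back-of-the-envelope sketch of what four orders of magnitude means for circuit size; only the 3-billion-gate figure comes from the talk, the rest is illustrative arithmetic:

```python
# Back-of-the-envelope on the RSA-2048 resource estimates quoted above.
single_circuit_gates = 3e9   # classic estimate: one monolithic circuit
improvement = 1e4            # "four orders of magnitude" in circuit size

gates_per_circuit = single_circuit_gates / improvement  # ~3e5 gates
print(f"Gates per circuit after the improvement: ~{gates_per_circuit:.0e}")
# The trade-off: many such circuits plus classical post-processing,
# which is precisely the quantum-centric supercomputing pattern.
```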
Now, with different, more practical algorithmic approaches, with integrated quantum and classical systems, and with continued interaction with quantum hardware as it matures, we will turn this landscape of single circuits into a landscape of quantum-centric supercomputing. As we announced in May this year, we are teaming up with the University of Tokyo and the University of Chicago towards the realization of the Blue Jay system. Both institutions will touch on all areas of research, but with the University of Tokyo we will focus on applications and componentry, and with the University of Chicago on communication and middleware for quantum.

So we have offered you a view today of our updated roadmap, and here I've given you a flavor of how quantum-centric supercomputing will come together. Throughout this path, we will keep engaging with the community to bring them the best that quantum computing can offer as a central part of the best supercomputing systems in the world. And with this, I will pass it back to Jerry. Thank you.

Wow. Antonio Córcoles is our resident myth buster. Now, to reach quantum-centric supercomputing and continue to scale through this era of utility, we need to evolve our system architecture. Back in 2019, we first introduced IBM Quantum System 1, and it represented a first-of-its-kind quantum computing system architecture for its time. But now, we need System 2: a modular quantum computing architecture. Let me invite David Bryan to tell you the full story. David?

Hey there. How are you doing? How's everybody holding up? Okay, we're on the home stretch; we've got a few more minutes, and then we can all go and have a drink or something. Okay. Thank you, Jerry. So, last year, we stood on stage and shared our conceptual designs for Quantum System 2, and we promised then that we would have a live working system by this year's summit. Here we are again. And yes, Quantum System 2 is live. It's operational. And it's at the T.J. Watson Research Center in Yorktown Heights. And I think we need some kind of energy here, some kind of round of applause.

Now, from a design perspective, it resembles our original concepts very closely. In fact, it's hard to tell the difference between the two. This is the concept render we shared last year, and this is what we had built as of last week. So it's one of those rare situations where we didn't lose a lot in translation going from concept to reality. We now have a quantum system that houses a three-tiered chandelier held within a cryostat, maintaining a near-perfect vacuum at temperatures colder than deep space. The chandelier currently holds three Heron processors. But as you know from today's sessions, we have an extended roadmap now, out to 2033, and we have a lot of new processors to come: Heron, Flamingo, Starling, Blue Jay. So Quantum System 2 is designed to be flexible enough to house all of the processors that we are going to be creating for at least the next five years, in multiple configurations.

In addition to the cryogenic infrastructure supporting multiple processors, we also need to control the qubits on the processors. The control systems that surround the cryostat are our third generation, with tunable-coupler support. Plus, they're extensible: as we increase the number of physical qubits on the inside, we can extend the control systems on the outside.
I find it rather poetic that all of this complex engineering and machinery surrounding the cryostat is focused on controlling a bunch of paired electrons. And these electrons are being put to work as we speak to help us better understand condensed matter and high-energy physics. Tiny parts of the universe are helping us understand other tiny parts of the universe.

Quantum System 2 isn't just a standalone system. It was designed to be modular, so multiple Quantum System 2s can be connected together to create much larger systems. In fact, our visions for the 100-million-gate Starling and the billion-gate Blue Jay are currently based on the Quantum System 2 architecture. But above all, following in the footsteps of Quantum System 1, we wanted Quantum System 2 to be beautiful and iconic, driving an emotional connection through the power of design. Quantum computing is now part of contemporary culture, and the quantum chandelier is now a science fiction character. Ironically, the most recognized part of the quantum computer is the one part you don't see in a working system. So we had to create beauty outside of the chandelier. And we do this with simple geometries, materials, and, above all, light.

Introducing IBM Quantum System 2, the world's first modular utility-scale quantum computer system. Quantum System 2 was designed to tackle complex problems that lie far beyond the reach of today's classical supercomputers. It stands 15 feet tall and operates in a near-perfect vacuum at temperatures colder than deep space. Initially powered by three 133-qubit Heron processors, Quantum System 2 is fully upgradable to the growing line of utility-scale QPUs that IBM will be releasing over the next five years. This is the world's first modular utility-scale quantum system.

So in addition to talking about physical qubits, we now need to be concerned with circuit size. By the end of 2024, each of the three Heron processors in Quantum System 2 will be able to process a remarkable 5,000 operations in a single quantum circuit. But the real triumph of Quantum System 2 is its modular design. Our new quantum coupling technology will allow multiple Quantum System 2s to connect together to create systems capable of running 100 million operations in a single quantum circuit. Continuing down this path, we plan to realize a system capable of running 1 billion operations in a single quantum circuit by 2033. That's why we call Quantum System 2 the building block of quantum-centric supercomputing. Today, our clients and partners are already using our 100-plus qubit systems to advance science, surpassing brute-force methods deployed on the world's most powerful classical supercomputers. And soon, they expect quantum applications offering unprecedented business value. Our mission is to bring useful quantum computing to the world. And it starts with Quantum System 2.

So, we did the work of designing Quantum System 2 with our excellent design partners, Map Project Office and Universal Design Studio, who should be in the audience somewhere, and an Italian company, Goppion, who specialize in high-end display cases for museums. Working with them, we carefully selected materials and finishes that play with reflections and light, but also reveal the technical details within. In the words of Thomas J. Watson Jr., good design is good business. But good design doesn't always mean going with super high-end materials, or what our lead engineer calls unobtainium.
There is a role for a more utilitarian design for future installations like data centers, where showcasing is less of a priority. So, it's now a question of function over form. We lose the glass partitions, but we leave the beautiful geometries and the reflective properties of the materials intact. This new data center version of Quantum System 2 is rugged and stripped down, but it retains the character of the original showcase model. And the reason we're sharing this design with you is because we're building a new quantum data center to house multiple Quantum System 2s, and we'd like to share our plans with you. We see this as the data center of the future.

Introducing Data Center B. We already have a Data Center A in Poughkeepsie, New York, where we currently house our fleet of eight utility-scale systems. Data Center B, also planned for Poughkeepsie, will initially house eight Quantum System 2s. Here, we've kept the space as wide open as possible, and we're feeding infrastructure down from the ceilings in what we call our inverted data center approach. There are no wall partitions at all; we felt that this gives a greater sense of freedom and visibility for the systems. I particularly love this: these are compressor units, but rather than hide them away, we turned them into a feature. There's no reason why infrastructure has to be ugly. Planning is already underway, and building will start in the second half of 2024, so I look forward to sharing our progress with you at next year's event.

Now, the last four slides I shared with you are just CGI renders. That's all they are. It's just a vision of what we intend to build. But bear in mind, Quantum System 2 was also just a render this time last year. So now we have the first building blocks for quantum-centric supercomputing. We are providing the world with a different form of computation. And everyone in this room has played a part in getting to this point. What we build is up to us all. Now I'll hand it back to Jerry. Thank you.

Thanks, David. Wow. So that's really the session that we have for you today. Condor is here. Heron is here. Flamingo is on its way. We showed you our extended roadmap for quality all the way towards Starling and Blue Jay. And System 2 is going to take us there, towards quantum-centric supercomputing.