I want to start by sharing an observation that I find inspiring. We are all explorers. Every day, businesses are required to explore physical and digital markets using new and constantly evolving technologies. And they must do this with imperfect information and constantly shifting societal, political, and environmental landscapes. Exploration can be exhilarating, but also daunting and even perilous. But there's one trait that all great explorers share: they understand the importance of navigation. Navigation is a skill that requires both data and tools, and the quintessential example of this is the map. However, there's a dilemma. What happens when you need to map something not yet discovered? How does one map the future?

This is something we do every day at IBM. Let me show you how we develop technology roadmaps that create new industries and redefine existing technologies. And there is no better example of this than our work in quantum computing. We've been exploring the powers of quantum computing for many years, and in that time, we've charted some amazing territories. In 2020, we published the IBM Quantum Roadmap, illustrating in detail the progress needed to unleash the power of quantum computing for business and science. With that roadmap, we shared our vision through 2023, and it has been an incredible success. But now we're ready to go further, extending this vision through 2025.

The road to quantum advantage is being built by increasing the scale, quality, and speed of quantum computing. And it all must be integrated to deliver a frictionless experience, which we realize through Qiskit Runtime. Let's start with scale. The roadmap to scale is a story of modularity in both the hardware and the software that controls it. Last year, with our Eagle processor, we were the first to cross the 100-qubit barrier.
As impressive as this is, I'm going to tell you how we will build quantum data centers with thousands and even hundreds of thousands of qubits working together: essentially, how we will build the first quantum-centric supercomputers. This year, we will release Osprey, a 433-qubit quantum processor that breaks new ground in our ability to control and measure quantum information. And next year, we will introduce Condor, the world's first quantum processor with over 1,000 qubits.

And if that isn't exciting enough, we will also enable parallelization of quantum computations. This is huge! We will be connecting multiple quantum processors on the same control hardware, each with its own controller, and introducing threaded runtimes. What this means is that Qiskit Runtime will be able to split a program into several tasks and automatically distribute them among multiple processors, further increasing the scale of our quantum computations.

As we move through 2024, we will continue to leverage our expertise in systems and semiconductors to develop truly modular and scalable quantum systems using high-bandwidth, high-fidelity links. These connected processors will operate like a single large quantum processing unit, scaling to computational spaces unlike anything we've built before. We'll begin with modules of quantum processors connected chip-to-chip. And here is where our quantum roadmap gets amazing. In 2025, we will introduce longer-range quantum communication in a system with over 4,000 qubits. But crucially, the architecture will be designed to scale to tens of thousands of qubits, bringing about the era of quantum-centric supercomputers.

And while the scale of computational power is important, we are equally focused on quality and speed. At the fundamental level, we need to improve coherence. With our Eagle processor specifically, we've just achieved another breakthrough in coherence that leads to a 3x improvement over our current production devices.
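The threaded-runtime idea described above, splitting a program into tasks and fanning them out across processors, can be illustrated generically in plain Python. This is a toy sketch, not the actual Qiskit Runtime API: `run_on_processor` and its fake result format are hypothetical stand-ins for submitting a sub-circuit to one processor's controller.

```python
from concurrent.futures import ThreadPoolExecutor

def run_on_processor(circuit_id, shots):
    """Hypothetical stand-in for executing one sub-circuit on one
    quantum processor; a real runtime would talk to control hardware."""
    # Fake measurement counts: split the shots between two bitstrings.
    return {"circuit": circuit_id,
            "counts": {"000": shots // 2, "111": shots - shots // 2}}

def threaded_runtime(circuits, shots=1024, max_workers=4):
    """Split a program into one task per sub-circuit and distribute the
    tasks among parallel workers, gathering results as they finish."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(run_on_processor, c, shots) for c in circuits]
        return [f.result() for f in futures]

results = threaded_runtime(["sub_0", "sub_1", "sub_2"])
print(len(results))  # -> 3: one result per sub-circuit
```

The same fan-out-and-gather pattern applies whether the workers are threads on one control server or separate processors, which is what lets a runtime scale a single program across multiple pieces of hardware.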
We are actively developing error suppression and mitigation techniques that will offer a scalable method to deal with errors in hardware. This is the holy grail of quantum computing and will be key to achieving computational advantage. Speed is also fundamental, and it is the reason why we have chosen superconducting qubits: their base operations are a thousand times faster than those of other technologies. Furthermore, with specially built control electronics, we are deploying dynamic circuits that will bring real-time classical processing to quantum computations, helping us compute more with less.

What an amazing journey. These advancements in hardware, software, and theory are establishing the era of quantum-centric supercomputers, where quantum, classical, and AI processing units will work together to help solve some incredibly valuable problems. This is the kind of work that IBM is made to do. We are pioneering a new industry, and we have charted a path to deliver quantum advantage for business and science. And while it's hard to predict a date, I'm telling you it is going to happen earlier than you expect. In fact, we're already building IBM Quantum System Two, the machine that will run the next generation of Qiskit Runtime and bring all of this to the world.

Let us examine another roadmap that is guiding our work in the incredibly dynamic area of AI. A recent exciting advancement in AI is something called foundation models, which is proving to be a game-changer for the field and for enterprises. Let me explain how they work. Foundation models are large neural networks trained on immense unlabeled datasets that can be adapted to produce other models for a variety of specific tasks. Think about us: when we learn a new task, we continuously use that understanding to perform future tasks, even seemingly unrelated ones.
In AI, we call this transfer learning, and for decades, we attempted to imitate it by training AI models to learn representations of the world using annotated data. Typically, humans label the data, something we call supervised learning. But creating a model big enough to work required hundreds of thousands of human-annotated sentences, which could cost hundreds of thousands of dollars to build and was simply unsustainable. Foundation models finally break through this barrier, learning from unlabeled data using a process called self-supervision. Self-supervision works by having the AI model predict a missing part of an input, for example, a missing word in a sentence, based on the context of the other words. The AI model learns how to represent the data by doing this again and again over billions of self-supervised inputs. In short, foundation models deliver both better performance and greater AI productivity.

Our AI roadmap shows how we're bringing these powerful models to your business. This year, we will enable the frictionless development of foundation models in the hybrid cloud for enterprise use cases. In 2023, we will help accelerate the adoption and deployment of foundation models by instituting trust guardrails and bringing in elements of causality and reasoning to enable AI governance. By 2024, we will achieve a 5x cost savings in building these models, in addition to a 5x savings in energy usage, thanks to the design of next-generation hardware and software.

The momentum foundation models are bringing to the adoption and scale of AI is remarkable. Today, they dominate natural language processing and are already in Watson's language portfolio, powering over 19 IBM products. It took seven years for Watson to cover 12 languages; after transitioning to foundation models, 13 more were added in just one year. But natural language processing is just the beginning.
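The self-supervision loop described above, hide part of the input and learn to predict it from context, can be shown in miniature. The sketch below is a toy count-based next-word model of my own, not IBM's foundation-model stack; real foundation models replace the counting with billion-parameter networks, but the training signal is generated the same way, from the unlabeled text itself.

```python
from collections import Counter, defaultdict

# Unlabeled text: no human annotation anywhere.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the dog chased the cat",
]

# Self-supervision: every adjacent word pair is a free training example --
# mask the second word and learn to predict it from the first.
next_word = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, target in zip(words, words[1:]):
        next_word[prev][target] += 1

def predict(prev):
    """Fill the masked slot with the word that most often follows `prev`."""
    return next_word[prev].most_common(1)[0][0]

print(predict("sat"))  # -> "on": both 'sat' sentences continue with 'on'
print(predict("on"))   # -> "the"
```

Because every word position yields a training example for free, the amount of supervision scales with the size of the corpus rather than with a labeling budget, which is exactly the economics that makes self-supervised foundation models practical.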
If you squint, you realize that languages are everywhere. Foundation models are being developed to learn the language of machines using time-series data from the IoT sensors that continue to permeate our world. They're being trained on chat and web-click data for use in customer care. They're learning representations of code to modernize, test, and secure IT systems. Foundation models are also igniting AI applications in areas like molecular modeling, being used to raise the rate of discovery of cutting-edge antimicrobials for healthcare by as much as 30 times. Foundation models are beginning to deliver on their broad promise, and they will have a very significant impact on business. In summary, IBM's AI technology roadmap illustrates how we're driving adoption, trust, and efficiency to deploy this new form of data representation and models so that our partners and clients can explore new and existing business opportunities like never before.

I think you can see the power and importance of mapping the future of technology and how it can be a tool for business. Individually, these maps clearly and concisely paint a vision of the future, but they do even more. The true power of these roadmaps is when they intersect. A great example of this is the IBM z16 that our systems team recently unveiled, one of the most sophisticated machines ever built. The path to IBM z16 started in 2015, when our semiconductor team, using EUV technology, created the world's first 7-nanometer chip. Jumping forward a few years, our AI hardware team unveiled Telum, a new 7-nanometer CPU chip and IBM's first processor with on-chip acceleration for AI inferencing. This was remarkable because it embedded AI in the heart of the chip, allowing it to detect fraud with millisecond latencies, within the window of a transaction. A single system can run 300 billion AI inference requests a day.
Finally, developments in lattice-based cryptography from our security team created quantum-safe cryptographic protocols that were incorporated across several layers of the z16, making the system secure against threats enabled by future quantum computers, threats to which modern security measures are vulnerable. This example shows how our work in semiconductors, AI, and security combined to create the most powerful and energy-efficient server for transactions on the planet. It was an incredibly complex process that took years and dozens of disciplines to make a reality, and it is our roadmaps that got us there.

And now we want to share this first set of roadmaps with you. We've created the IBM Research Technology Atlas. It's a navigation tool for explorers. It contains our ambitious technology roadmaps in semiconductors, AI, hybrid cloud, security, quantum computing, and systems. Each roadmap projects three years into the future in detail, and the Atlas visualizes how the roadmaps intersect to enable future capabilities. We believe it's a tool that will help you map your future goals to the technologies that will make them possible. Mapping the future of technology is hard but important work. Let's do it together. Work with us to build joint strategic roadmaps, to guide explorations, to navigate complexity, and to unleash your full potential. Let's create our maps of the future.