We are headed into the research headquarters of IBM in Yorktown Heights, New York. I really love this place. IBM Research is home to 3,000 scientists and researchers with laboratories around the world who deeply believe in the power of the scientific method to invent what's next for our company, for our clients, and for society. Over the last couple of decades, I've gotten to visit and engage with the teams at many of the great labs around the world, and while I must confess my bias, I really believe there is no other place like IBM Research. The truth is that technology can be considered any device or process that makes something easier or better or faster. But technology, or what we've come to know as tech, is an often abused word that has many flavors. Some of it is relatively trivial, nothing more than a temporary shiny new object. But then there is tech that is not just hard, but enduring and deep in its implications. This hard tech may be behind the scenes of the highly intuitive user interfaces of our devices, but make no mistake about it: without it, the world would simply stop, and true revolutions in technology would not take place. Every company needs to engage with technology to stay relevant. And today, we are going to talk about what kind of technology company IBM is, and why IBM Research continues to attract some of the brightest minds in the world to tackle the hardest problems in computing. It is a self-selecting group. It is a calling for those that want to take on what looks to be almost impossible. I'm Dario Gil, the director of IBM Research, and I want to take you on a journey into the world of hard tech at IBM. At IBM, we invest in and create the fundamental technologies that power the world of computing. Many of the modern digital technologies that you use today, running businesses, banks, airlines, digital commerce in our devices, originated in innovations created by IBM, many of them right here in this building.
It would be impossible to tell the story of computing without IBM. And it would be equally impossible to understand the next computing revolution without the technologies that IBM Research is now pioneering. These innovations don't happen in a vacuum. They are built on hard tech challenges and fundamental materials advancements from the atomic level all the way out to the edge of computing. Quite simply, this is what IBM does and who we are at our core. Hard tech computing is IBM's soul. Now, I'd like to give you all a glimpse of the innovations that we're building that will power computing today and for decades to come. First, we will look at what it takes to push the absolute limits of computing performance. You're going to be the first ones to see the world's first two nanometer node computer chip. Next, we'll check in with our quantum computing team to talk about how we have created machines that calculate as nature does. We will show you the latest advancements in quantum computing hardware and software as we work towards building a 1,000-plus qubit machine in the next two years, and at the same time make it as easy to program these powerful machines as it is to use today's programming tools. Our next stop will take us deep into the world of languages and AI. We'll see how natural language processing is being applied not just to human language but to the language of our machines, the world of code, and to the language of industries such as chemistry. And lastly, we will survey the world's largest computer, the cloud. We'll learn how the future of business will be made possible through a visionary serverless open architecture, aimed at making the massively heterogeneous and distributed computers that make up the hybrid cloud ultimately work as if they were a single, infinitely powerful computer. It is an exciting journey into our hard tech projects that will change how you think about computing and the coming era of accelerated discovery. Let's get started.
We are in Albany, New York at the most advanced collaborative semiconductor research facility in the world. This kind of facility is the product of billions of dollars of investment and over 20 years of dedicated work. It has been built on a highly successful public-private model designed to support an ecosystem of world leading semiconductor suppliers and manufacturers. It is where IBM researchers have made some globally altering scientific and technological breakthroughs in nanotech and semiconductor research. Many of you may not know this, but my early research at IBM was in the field of nanotechnology and lithography. You probably have heard about the field of nanotechnology. The field of lithography is a bit more obscure, but it is at the heart of why computing gets better and cheaper decade after decade. Mukesh Khare, one of the most respected leaders in the field of semiconductor research, is going to share how we're building the chips of the future. We're going to get a chance to talk about one of my favorite subjects for hard tech, semiconductors. Maybe we could start by discussing how our chips are made. Well, Dario, it may not be immediately obvious, but the transistor, the basic unit of computation, is printed onto a wafer. Wafers are sawed out of cylinders of pure crystalline silicon, then polished to a mirror-like finish. With lithography, we are able to print and ultimately etch billions of transistors onto a wafer with atomic precision. A high-energy laser fires on a microscopic droplet of molten tin and turns it into plasma, emitting extreme ultraviolet light, which is then focused into a beam. We reflect that beam off a mask pattern that contains the complex design of the circuitry of the chip we want to print. That pattern of light is then shrunk through an array of atomically precise reflective mirrors, finally casting it onto the silicon wafer at a microscopic level.
This light exposure burns the pattern into the photoresist, and after it is developed, it forms a relief pattern that can be used to etch the desired structures into the silicon. The wafer is subsequently processed and cleaned to remove the resist. This process gets repeated layer after layer, as many as a hundred times, and over days and weeks, we get to ultimately create a fully functioning chip with transistor dimensions at a nanometer level. Imagine we're stepping down from our typical field of view by powers of 10. At 10 centimeters, our field of view crops the edges of a device like our phone. We see a printed circuit board as we move down to 1 centimeter, entering the chip and its 50 billion transistors. At a millimeter, we're now roughly at the point to which the industry had scaled 20 years ago, 30 million transistors on a single chip. Stepping down now to 100 microns, we see the largest elements of the integrated circuit. Just beyond this scale in our biology would be the width of a single strand of human hair. Now at 10 microns, you'll see a portion of the chip with a large array of devices performing core functions of the chip. Passing beyond the nucleus of our cells at 1 micron, we continue to travel down deeper until reaching 10 nanometers, the scale of the very fabric of our makeup. So, Mukesh, there's a lot going on in this fab. Tell me about it. We use lithography technology to build the basic devices for computation called transistors. Working together with our talented research team and many partners, we have pioneered a new transistor structure. We at IBM call this the nanosheet, which has become the foundation for every chip manufacturer's future chip generation. The nanosheet structure is formed by vertically stacking multiple layers of silicon sheet channels around 5 nanometers in thickness, which is about the width of two DNA molecules. In 2015, we created the world's first 7 nanometer test chip.
A few years later, in 2017, we did it again, creating the 5 nanometer test chip where we first introduced the nanosheet technology to the world. And now, I'm very proud and excited to say that we've done it again, creating the world's first 2 nanometer node chip. And we did it right here in this facility. There are almost 10 times more transistors on this wafer than the number of trees in the entire world. Scaling to this 2 nanometer framework will equate to a 45% performance improvement over today's 7 nanometer chips using the same amount of power, or a 75% power savings at the same performance level. Extraordinary, Mukesh. It's hard to imagine that this technology could get any better, but I'm sure you guys are going to try. Yes, and we like the challenge. These continued technology advances ensure an enduring platform not only for our own hardware and systems, but also for the entire technology ecosystem. While the commercial availability of 2 nanometer processors is still several years away, the IBM Research innovation pipeline gets directly commercialized through our hardware platforms. In fact, IBM's first commercialized 7 nanometer processor, based on our 2015 innovation, will appear later this year in IBM Power10-based systems. And next, we will show you how these advancements are bolstering the capabilities of our IBM Z systems. Our work here underscores the importance of advancing semiconductor chip design and performance across all modern computing architectures. And investing in these innovations is also critical for our partners, such as Intel and Samsung. It is also vital to the secure chip supply chains of industry, from IT to car makers, and to the success and security of our nations. Our 2 nanometer breakthrough will create advanced nodes that give hardware designers a more powerful canvas to create specialized tech.
2 nanometers is now the foundation for researchers to explore the future of hardware, including AI hardware, that can drive greater performance across everything we do. But we face a challenge. Today's AI is incredibly power hungry. AI's rapid progress has fueled an insatiable demand for computing power for ever larger neural network models on ever-growing data sets. We have to figure out new methods for running these large AI models on today's most advanced machines efficiently. In other words, it requires hard tech. IBM Fellow Donna Dillenberger has been innovating in systems for years, and she's here to tell us more about what makes IBM Z such an amazing machine. Hey, Donna, how are you? Good to see you. Hi, Dario. So you're a world-class systems researcher. So tell us, what makes the IBM Z system so special? Z is known for its speed and its scalability. The z15 was built with 9.4 billion transistors. With that type of power, we can run OpenShift workloads at 4.7x higher performance, 4.4x lower latency, with 34% less cost. It also runs 1 trillion transactions a day. It does that with half the energy that other servers require. It's the greenest server on the planet. It also has the highest availability. Z stands for zero downtime. There are clients that have never had an unplanned outage in years. While a Z server is running, you could pull out its memory, its processors, its IO drawers, and it will still run. Yeah, what a marvel of a machine, right, in terms of transaction processing. But what about the world of AI? What can it do for AI? AI can be done on Z with millisecond response time. You have these transactions just being thrown at the server, 20,000, 30,000 transactions a second. No other server could do that and run AI embedded in real time. Well, Donna, the green transactional power of Z combined with AI and security makes it like a marvel of engineering.
So I want to thank you, Donna, for the amazing work that you do and that the team does to make the best computer systems in the world. Thank you and see you soon. Thanks, Dario. The progress being made to advance the performance of classical computing is truly amazing. The two nanometer node chip we explored represents the absolute cutting edge of computing technology and proves that the power of bits continues to be remarkable. The way we are pushing computing to excel in AI workloads will allow pervasive and interconnected intelligent systems to provide extraordinary value. And these advancements are not just science for science's sake, but for real business outcomes. It's how our research provides the technological innovation behind IBM's most advanced systems available today. For those of you that may be new to quantum computing, there are a few fundamental concepts that make it remarkably different from classical computing. We're all familiar with the term bits. It's the fundamental unit of information that classical computers use today. We've seen endless representations of this binary system in the strings of ones and zeros that people have come to think of as data. However, in quantum, the fundamental unit of information is called a quantum bit, or qubit. The basic idea is that this qubit can carry information quantum mechanically, or in other words, the same way that nature carries information. Simply put, a qubit is not bound to a binary system of information like ones and zeros. And that very simple difference is what makes quantum computing so powerful and so complicated. For you to truly appreciate what we've achieved, I'm going to introduce you to two physicists who can tell us more about how we got here. So Jerry, an exciting quantum research lab. Tell us, what did it take to build the first programmable quantum computer on the cloud in 2016? Yeah, so really in order to be able to get a system on the cloud, it started a lot in labs like this, right?
Where we're working on fundamental research and understanding how to make the underlying qubits better, reproducible, reliable, and stable. So we had to work on a lot of things to basically make them usable and actually accessible by anyone on the cloud. Now, it's actually very interesting looking back at where we are today from even a decade ago. If you take a look right here, this was state of the art in terms of our qubits in 2011. So what happened since then? Okay, so that was a decade ago, then 2016, first computer on the cloud. And then what happened in the last five years? Yeah, so really in the last five years, it's all been about actually deploying real systems. We've deployed over 30 systems since 2016, over 20 systems accessible right now through the IBM cloud, serving 150 clients, over 300,000 users, running over a billion executions a day. Wow, so that is amazing to just see that and just realize that now there's over 20 quantum computers. It's just amazing progress. So what's next? Yeah, so really now what we're doing is we're building up towards new generations of systems. Last year, we had the 65-qubit Hummingbird machine. This year, we're gonna have the Eagle processor with over 127 qubits, and moving towards over a thousand qubits in 2023. A thousand qubits, so we're gonna get a chance to see where that can be built. That's gonna be built in this lab here, but then we're even looking beyond that, right? So we need to look at, how do we go beyond 1,000 qubits? So this is the Super Fridge, our project to build a refrigerator that's gonna be large enough to give us runway beyond 1,000 qubits. Yeah, so Jerry, so what are we seeing here? What is this giant thing? So in the other room, we saw that there are actually these dilution refrigeration systems that allow us to cool down our qubits, our quantum processors, to 15 millikelvin. But like, how cold is that? That's many, many times colder than outer space, okay?
But the point is that in order to actually cool them down, we also have all these other components that are part of it. So there's wires and there's filters and attenuators, all these different components. And as we scale up along that roadmap towards 1,000 and beyond, there's just more and more stuff. And so planning for, say, even up to a million qubits, we need to build bigger refrigeration systems, and we're doing that right here. So it is a testament to what it takes to succeed in quantum computing. You have to invest for a long period of time on this sustained roadmap. And it's also a reflection of the theme of today, of talking about hard tech in computing. And I gotta tell you, Jerry, the work that you and the team do, it is the hardest form of tech in computing. And I wanna thank you for all the fabulous work, and keep up the great results. We're gonna be now talking to Jay Gambetta about how we are putting quantum to good use. Hey Jay, how are you? Good, good. Just came back from spending some quality time with Jerry and it brought back memories of the launch of the IBM Quantum Experience. And it's a story that is not just about hardware. It's a huge software environment that makes all of this possible and makes quantum computing a reality. So first, what is your memory of that time in May 2016? I think what I learned most, and I learned a lot, was there's a big difference between doing a science experiment and building a practical system. Yeah, because in the end there's a difference. Once you exposed it, in that first week, you went from a few dozen people to thousands of people using the system. I mean, and the numbers are now what? Yeah, so as you say, we got up to 7,000 people in the first week and now we're up to around 300,000 users and they're running at least a billion circuits a day. You brought up quantum circuits, and sometimes people get confused about what is a circuit.
They imagine some physical connecting thing, but circuits, it's a software construct behind these things. What is a quantum circuit? Yeah, a quantum circuit is the fundamental unit for quantum computing. You can think of it as the instructions that can only be done on a quantum computer. It does the marvelous math that quantum mechanics makes possible. We sometimes draw the analogy of MIPS, like the number of instructions per second that you can run on a classical machine, but in the quantum world it's how many of these circuits you can run on an ongoing basis, because some of the math that you get to do there you can only do with quantum computers, but also you've got to run a lot of it. So how much do you need to run? So you can think of quantum computing as, I need to run like a billion of these circuits. So take this Nature paper from 2017 that we did. In this we actually ran around four billion circuits. So let's take a billion circuits as a typical application. And if you think about that, if I need to run a billion circuits and I have to wait a microsecond for each, that means I'm gonna run it in basically 16 minutes or so. If I need to run it at a millisecond, it's gonna be about 11.5 days. So now you see how fast I can run these circuits really matters for doing these practical applications. So I think that is not widely appreciated, right? Any practical application, machine learning or chemistry or optimization, where you wanna leverage the power of quantum systems, you're gonna need to run hundreds of millions or a billion quantum circuits iteratively, right? To get to your answer. One of the reasons I really love the superconducting system is the fundamental physics allows us to run these with fast gates, fast reset. In fact, our latest Falcon processor has an update that gives you the ability to reset the qubits in less than a microsecond. And you asked for a comparison to ion traps. Ion traps have done wonderful demonstrations. They keep pushing the fidelity, they're great.
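Jay's back-of-envelope numbers can be checked with a few lines of arithmetic. This is just a sketch of the calculation he describes: a billion serial circuit executions at the microsecond-scale and millisecond-scale latencies he quotes.

```python
# Back-of-envelope timing for iterative quantum workloads:
# total wall time = (number of circuit executions) x (time per execution).
# The 1e-6 s and 1e-3 s per-circuit latencies are the figures discussed above.

def total_runtime_seconds(n_circuits: int, seconds_per_circuit: float) -> float:
    """Total serial execution time for n_circuits at a fixed per-circuit latency."""
    return n_circuits * seconds_per_circuit

BILLION = 1_000_000_000

fast = total_runtime_seconds(BILLION, 1e-6)  # microsecond-scale circuits
slow = total_runtime_seconds(BILLION, 1e-3)  # millisecond-scale circuits

print(f"At 1 us/circuit: {fast / 60:.1f} minutes")   # about 16.7 minutes
print(f"At 1 ms/circuit: {slow / 86_400:.1f} days")  # about 11.6 days
```

The thousand-fold gap between the two latencies translates directly into the gap between minutes and days that makes iterative applications practical or not.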
But at the moment, their time to cool and get their trap ready, so to get a quantum two-qubit gate to happen, takes 100 milliseconds typically. And this is a big challenge for them. They've got some great demonstrations that are showing some ways to get beyond it. But at the moment, it's really, really slow for them to run it. And this is why, for a superconducting technology, we can imagine we can run our circuits at a much faster pace. But again, it's milliseconds versus microseconds. There's a factor of a thousand X there. So let's bring it back to an application. You're saying, to be able to reproduce state-of-the-art results, say of a chemistry experiment, but that would be true for machine learning as well, you're talking then down the road of comparing technology that would deliver your result in tens or hundreds of days versus being able to do things in days or hours. That's the difference. This is one of the reasons we like this technology, because we can see how we can build a business on it. We can run for the users maybe thousands of applications per day with all the systems we have, and we can see that we'll be able to get lots of results. Whereas I look at these ions as a great example of a scientific experiment. It gets us to today's announcement, right, of something called the Qiskit Runtime. You had set the goal for the team of achieving a 100X speedup. So tell us what it is. Yeah, so I would like to correct you slightly. We got 120 times, as a combination of the runtime, better devices, better software, and some algorithmic improvements, and I'm very happy that we got what we promised. I'd love to be corrected when it's better. But let me show you how it works. Okay, let's take a look. For this problem, the first step is the user needs to define a molecule and its electronic structure. The next step, the user needs to specify the quantum program and the circuits that will be used.
Then the user constructs the VQE program that they want to run. This is based on the Qiskit Runtime, as we talked about. Now they simply just call solve. So now here's where the fun starts. The quantum computer is now this quantum system plus a hybrid classical server, plus the user's computer. Let's start with the user in San Francisco. The runtime program now goes from the user's computer through the cloud, and it actually, in our case, goes to the IBM cloud that's in Austin, where it's authenticated and sent to the IBM quantum data center in Poughkeepsie. Here the runtime manager starts. It sets up this new container program, and this does the classical computing and it makes the circuits to be run. As you see, there's lots of circuits that are being sent to the quantum systems. These are run on our quantum systems and the results come back, and you'll see there's lots of zeros and ones. Now it goes to the classical server again and it processes those results. And if the algorithm calls for it, it resends those circuits through to the quantum system, and it comes back again, processes the results, gets the final answer and sends it back to the user that's in San Francisco, and now they see, as in this example, the chemistry plot. I think everybody's gonna be feeling a sense of relief to say they're not gonna need to know quantum mechanics to benefit from quantum computing. But this vision is powerful, right? Because you're gonna run your everyday program that you like and, behind the scenes, this is what's gonna be enabled through this runtime. So Jay, that's amazing. What's next? So what's next? Beyond the chemistry, we also have an exciting result in AI where we actually use this hybrid classical computer combination to find the correct circuits to improve an AI task. And I'm excited to bring that out. And ultimately the release of the 127 qubit system this year. Yeah, so that's the thing.
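The hybrid loop Jay just walked through, a classical optimizer proposing parameters, the quantum system executing batches of circuits, and results fed back until the algorithm converges, can be sketched as a toy loop. Everything below is illustrative only: `run_circuits` is a stand-in for real quantum execution, and the "energy" is a simple quadratic, not real chemistry.

```python
import random

# Toy sketch of the hybrid quantum-classical loop behind a VQE-style run.
# run_circuits() stands in for the quantum hardware: in the real Qiskit
# Runtime flow it would execute batches of parameterized circuits.

random.seed(0)  # deterministic for illustration

def run_circuits(theta: float) -> float:
    """Pretend 'energy estimate' returned by the quantum system (stand-in)."""
    return (theta - 1.3) ** 2 + 0.5  # minimum of 0.5 at theta = 1.3

def vqe_loop(iterations: int = 500, step: float = 0.05) -> tuple[float, float]:
    """Classical optimizer: crude random-perturbation descent over theta."""
    theta = 0.0
    best = run_circuits(theta)
    for _ in range(iterations):
        candidate = theta + random.uniform(-step, step)
        energy = run_circuits(candidate)      # circuits sent to the "QPU"
        if energy < best:                     # classical post-processing
            theta, best = candidate, energy   # accept improved parameters
    return theta, best

theta, energy = vqe_loop()
print(f"converged theta={theta:.2f}, energy={energy:.3f}")
```

The point of the runtime is that this whole round trip, circuits out, measurement results back, parameters updated, happens close to the hardware instead of across the internet for every iteration.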
We talked a lot about chemistry, but there is a burgeoning field at the intersection of quantum and AI, and the theory team and the software team have done really seminal work that is also deeply influencing the field, and now this roadmap of more powerful machines. Thank you to you, the team, and the whole Qiskit community and the IBM Quantum community that are really pioneering a whole new industry. So a pleasure, and talk to you soon. We've defined the industry's leading roadmap for quantum advancement through our family of superconducting qubit processors to deliver, generation after generation, the most capable quantum computers in the world. A new system every year, with the goal of unleashing the era of quantum advantage, where we aim to achieve computational speeds that will drastically exceed classical computers. We're making the hard tech of quantum frictionless for the user. That's why we've curated, created and nurtured a global quantum community through our open source software Qiskit, the world's most popular quantum software development environment. It has helped usher in a fast-growing quantum developer community, and we brought the power of all of this to business with our IBM Quantum Network, a global network of quantum computing partnerships consisting of hundreds of businesses, startups, institutions and governments. It's been an incredible journey so far, but know that we're just getting started. We invite you to join us to benefit from the growth of the quantum industry that IBM is pioneering. Let's now take a moment to talk about AI. Since the field got started in the 1950s, the journey for AI has been a long and winding one. It's been both promising and confusing, and at times underhyped as well as overhyped. The grand challenge of AI has long been to create a system that can truly understand human language, not mimic, but understand, reason and learn from human language. It is an arduous task and one that still challenges us today.
But the pace at which we have made progress has been vastly accelerated through new methods such as deep learning. But what if we turn these powerful new AI tools to look at languages in a completely different way? You see, languages have been the cornerstone of human progress since the beginning. And although we're not the only species that communicates through audible languages, we are however the only species that has taken the next step of abstracting that language into symbols. All around us are the symbols of language. From the typographic characters of our spoken language, to the numeric symbols representing the properties of mathematics, to the diagrammatic maps displaying the chemistry and arrangement of molecules. Languages surround us. They help us think and reason, then allow us to share those thoughts with others. Now we have seen the power of AI applied to human language. Its importance and pervasiveness is undeniable. But at IBM Research, we are also exploring how to teach AI two new languages and their exciting implications. The language of code and the language of chemistry. This is where AI's journey is about to accelerate. We are now going to talk to IBM Fellow Maja Vukovic. She and her team are spearheading great work. Let's go meet Maja. Hi Maja, how are you? Hi Dario, how are you? Maja, let's talk about what we are unveiling, and what do you see as the power of applying AI for code? So our AI for code technology is going to fundamentally change how we think about coding. Let me first give you an example of some of the work we have done and then let's talk about how we did it. One of our clients came to us with a problem they couldn't crack. Imagine their mission critical application has ballooned to over 1.5 million lines of code. Decades of adding, migrating, combining different systems. Moreover, this evolution of the code happened by multiple development teams, some of which moved on to different roles or out of the organization.
Some of them are not even in the organization anymore. Correct, and there may not even be any documentation left. So imagine that, and imagine how this kind of thing impacted the operations of this application over time. So the client put together a team dedicated to understanding how this code works, and understanding which parts could be made leaner, which parts could be better rebuilt to take advantage of the cloud's agility. And it took them over two years of trying without a result. And why is that? Well, we as humans, we are not built to go and look through 1.5 million lines of code and understand what business functions are buried in there. But luckily AI is there, and AI is very good at this. That is a really hard problem. So how did we apply AI for code to actually solve this problem? So we built an AI model that helped us, in a very short amount of time, to comb through all the code in this application. So the AI model helped us not only identify which parts of the code are obsolete or no longer in use, and which parts of the code are redundant, but also which parts of the code can be grouped into better, more manageable groups of code, or rather microservices. Not only did AI help us recommend what are the suitable business function-driven microservices, but we can also use AI to help us generate the code for these target microservices, further saving time. So that part of it is automatic? AI is helping us write the code that is the target microservice? Correct, yeah. So it further saves time and effort for the developers. It can also tell you where the gaps are, what else needs to be done to make those microservices fully executable. So as you can imagine, this simplifies and accelerates the entire application refactoring process tremendously. Not just when you think about one business application, but look at our clients. They have thousands of applications in their portfolio.
And you were giving the example that just one application, a million and a half lines of code, took two years. So imagine if you have to modernize thousands of applications, right? That's the power of the technology that you and the team have developed: being able to compress that time from a multi-year effort to something that you can do in months or weeks. Yeah, and this is, you know, as we said, what AI is very good at. It's amazing technology and I'm just very excited about the impact that this is gonna have for our clients' application modernization efforts. Very often we think about software and the role of software in business, and it's really becoming the language of business. So tell us a little bit about the broader implications of AI for code and what's next. Well, I'm very excited that we are launching Project CodeNet today and making over 14 million samples of code available as part of an open source dataset on GitHub, right? If you thought that 1.5 million lines of code is a lot, think about it: 14 million code samples that we have derived out of half a billion lines of code. Our team has extracted the most representative code samples that can help AI train and better help developers write software. Yeah, so in some ways many are familiar with ImageNet and the implications that that had for the AI field in terms of visual recognition efforts and so on, and the explosion of the utilization of these datasets with deep learning to be able to propel the state of the art of image recognition. So what you and the team are doing here is something similar, but now for the world of code. Is that right? Right. So it becomes a benchmark, a dataset benchmark, that can be used for source-to-source translation.
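One intuition behind grouping a monolith's code into candidate microservices, as Maja describes, is that functions that call each other tend to belong together. The sketch below is not IBM's actual AI model; it is only a toy illustration of that grouping idea, using connected components of a hypothetical call graph.

```python
# Toy illustration (not IBM's actual model): one crude signal for grouping a
# monolith's functions into candidate microservices is call-graph
# connectivity -- functions that call each other likely belong together.

def microservice_candidates(call_edges):
    """Group functions into connected components of the call graph."""
    adj = {}  # undirected adjacency map
    for caller, callee in call_edges:
        adj.setdefault(caller, set()).add(callee)
        adj.setdefault(callee, set()).add(caller)
    seen, groups = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:                      # depth-first search
            fn = stack.pop()
            if fn in component:
                continue
            component.add(fn)
            stack.extend(adj[fn] - component)
        seen |= component
        groups.append(sorted(component))
    return groups

# Hypothetical call edges mined from a monolith:
edges = [("create_order", "check_stock"), ("check_stock", "reserve_item"),
         ("send_invoice", "format_pdf")]
print(microservice_candidates(edges))
# Two candidate groups emerge: order handling vs. invoicing.
```

A real system combines many more signals, such as data access patterns and business-function labels, but the basic move from a tangle of code to a small number of cohesive groups is the same.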
Yeah, I just cannot help but think again of the productivity impact, if we look at our clients and everybody's enterprise, of how many software developers are out there and the role that AI can play to help with modernization efforts, write better code, debug code, deploy code faster. I mean, I just think that the implications are sort of boundless, and I'm just so excited about this announcement. It's really beautiful work that you and the team are doing, right? Really, thank you. Thanks, Dario. Thank you. So from learning about the implications of teaching AI the language of code, we're gonna now have the opportunity to learn how AI is learning a different language. Let me share with you another piece of tech in which AI is learning a different language. This time, the language of chemistry. Leading much of this work is Teo Laino and his team in Zurich, Switzerland. Hey, Teo. Few in the world know or associate the world of chemistry with IBM, but now what's very exciting is that we are training and teaching AI the language of chemistry, and you've been a pioneer in doing that. Can you tell us a little bit how that works? Indeed, Dario. We have been able to learn how to teach the language of chemistry to the AI architecture, and the result has been a way of accelerating discovery for designing new materials that, instead of taking years and millions of dollars in budget, can now be designed and synthesized in weeks or months. So Teo, what is the technology behind this? When we say AI, what kinds of technologies are we using behind the scenes to do chemistry? Let me go a little bit deeper into the heart of the technology. The very first thing is that we use AI, and more specifically a natural language processing architecture, to curate all the chemical records from publicly available, unstructured chemical records. And the parallel, perhaps, for everybody to understand is that in the context of human language, let's say translation, right?
Between, say, Italian and English, we would have historically curated large amounts of human-translated documents and then trained neural networks to do that mapping for us. But here, what you and the team did was curate large bodies of patents and publications that contain the language of chemical translation, right? The diagrams of chemistry, of how you get to a reaction, as an example? The initial effort was made in 2018, when we made all the trained models, and also the core of the architecture, the molecular transformer, available to the scientific community through a portal that we call IBM RXN for Chemistry. The portal has been a great success. We have been gathering attention and building a fast-growing community of roughly 25,000 users who have used the AI models almost four million times in slightly more than two years. So this gives a little bit of an idea of how interested the audience is, how interested the scientific community is, in the journey toward the digitalization of chemistry. Yeah, Teo, I think the work that you and the team have done on the core AI, the digital platform IBM RXN, and the digital robot that then helps you synthesize has really been remarkable. It is the pioneering of a whole new field. And I think, Teo, what you and the team have demonstrated in the example of chemistry, which is a notoriously difficult and challenging language, is really remarkable, and it's illustrative of the potential. So thank you very much for the fantastic work and for spending some time with us today. Thank you, Dario. You just saw two powerful examples of how IBM Research is advancing the state of the art in AI and reimagining how teaching AI new languages will accelerate software productivity and scientific discovery. Language, automation, and trust are the three pillars of IBM's AI strategy.
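The translation analogy above can be made concrete. Chemistry-as-language models typically represent molecules and reactions as text (for example, SMILES notation) and tokenize them the way an NLP model tokenizes words. The sketch below follows the regex-based tokenization scheme commonly used in the molecular-transformer literature; it is an illustrative assumption, not IBM RXN's exact pipeline.

```python
import re

# Multi-character atoms (Cl, Br), bracketed atoms, and the reaction
# arrow ">>" are kept as single tokens; everything else splits per character.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|[BCNOPSFI]|[bcnops]|@@?|>>|[=#\-\+\\/:~\(\)\.]|%\d{2}|\d)"
)

def tokenize_reaction(smiles: str):
    """Split a reaction SMILES string into model-ready tokens."""
    tokens = SMILES_TOKEN.findall(smiles)
    # Sanity check: tokenization must be lossless for the model to learn from it.
    assert "".join(tokens) == smiles, "tokenizer dropped characters"
    return tokens

# Acetic acid + ethanol -> ethyl acetate, written as a reaction SMILES:
print(tokenize_reaction("CC(=O)O.CCO>>CC(=O)OCC"))
# → ['C', 'C', '(', '=', 'O', ')', 'O', '.', 'C', 'C', 'O', '>>',
#    'C', 'C', '(', '=', 'O', ')', 'O', 'C', 'C']
```

Once reactions are tokenized like sentences, a standard sequence-to-sequence translation architecture can learn to map reactants to products, exactly as it would map Italian to English.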
With innovation from IBM Research now being integrated into IBM's Watson AI portfolio faster than we've ever done before, this is the time to scale trusted AI for business. Our journey today started by inspecting the world at the nanoscale as we flew through IBM's latest groundbreaking work in semiconductors. Now, for our last stop, we're going to widen our aperture and talk about the world's largest and most powerful computing resource ever created: the cloud. Let's take a second to visualize it. This globe shows the major data centers of the world's top public cloud providers, hundreds of locations in dozens of countries that span nearly every continent. However, this paints only a portion of the picture. What is not shown are the massive number of private computing environments that exist in silos across the globe. The cloud has dramatically evolved over many years into what it is today: a massively distributed network of public and private data centers comprising zettabytes of computing power and data storage. For us to fully appreciate the engineering and networking marvel that is the hybrid cloud, which combines public and private environments, we must appreciate the software that runs it. Enter the world of hard tech software. Hey, Priya, how are you? Hi, Dario. How are you? It's so good to see you. I remember a couple of years ago when you were telling me, Dario, for all the progress that's happening in cloud, we've got to get to the point where the cloud works as if it were a single, infinitely powerful computer. So what do you mean by that? Well, Dario, think about the simplicity of just working on your laptop. You have a common operating system, tools you're familiar with, and most importantly, you're spending most of your time working on code. Developing on the cloud is far from that. You have to understand the nuances of all the cloud providers: there's AWS, Azure, GCP, IBM, private clouds.
You have to provision cloud resources that might take a while to come online. And you have to worry about things like security, compliance, resiliency, scalability, cost efficiency. It's just a lot of complexity. When I think about the heterogeneous nature of the cloud, everything from large data centers to the edge, and all these complexities that you're talking about, is there a prayer that we can actually address it and realize this vision? Yeah, indeed. I think it's one of the greatest challenges that we should solve right now in computer science: how to harness this tremendously heterogeneous and distributed system. I think there are two key elements to a good software architecture for this. The first is open technologies, and the second is the right software abstractions. Open technologies, because proprietary software stacks from different vendors not only add to all this complexity, they stifle innovation. Key software abstractions start with the operating system, which is Linux. As you know, Dario, Linux as the operating system for the data center era really unleashed a proliferation of software, including virtualization technologies like containers, and that ushered in the cloud era. Hybrid cloud is no different. We now need a distributed operating system to provide that common layer of abstraction across these heterogeneous and distributed cloud resources, and Kubernetes is the open technology that's emerging as the winner in this evolutionary battle. So you have Linux, containers, Kubernetes: these are the open technologies. And when it comes to enterprise-ready, supported, most secure versions of this software, you have Red Hat Enterprise Linux, or RHEL, and OpenShift. This is our hybrid cloud platform, and this is the foundation for our cloud computer. So that is the path to wrangling this complexity into something that gives you productivity.
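The "common layer of abstraction" idea can be seen in miniature in how Kubernetes workloads are described: the same declarative manifest means the same thing on any conformant cluster, public or private. Below is a minimal sketch that builds such a manifest programmatically; the application name and image are placeholders, not a real IBM workload.

```python
import json

def deployment_manifest(name: str, image: str, replicas: int) -> dict:
    """Build a minimal Kubernetes Deployment manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

# The exact same description works against any conformant cluster,
# whether it runs on a public cloud, on premises, or at the edge.
manifest = deployment_manifest("inventory-api", "registry.example.com/inventory:1.0", 3)
print(json.dumps(manifest, indent=2))
```

That portability of the workload description, rather than any single feature, is what makes Kubernetes a plausible "distributed operating system" for the hybrid cloud.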
But you and the team are also pushing a vision that goes even beyond that, this world of serverless. So what is it, and why is it so exciting? That's right. Serverless technologies are the key to realizing this vision of the cloud as a computer. There are three key attributes to serverless: the first is ease of use, then there's on-demand elasticity, and pay for what you use. So let me give you an example. Take a simple data prep task on the cloud, which is fairly common, but the data in this case could be coming from anywhere, literally, edge environments, for example. To make this as simple as a command you could issue on your laptop, a lot of things have to happen under the covers, and today it's the developers and the data scientists doing these things manually. So I have to worry about: do I have access? Am I allowed to move the data? Where are the API keys? How many containers should I spin up? And this is what I spend most of my time on. But with serverless, you can literally boil this down to one single command, as simple as moving files around on your laptop, and the serverless platform does the rest underneath. So that's the beauty of serverless. We are pushing this vision forward today in the Knative open source community. And just like with Linux and Kubernetes, there is a supported, enterprise-ready version of Knative available on OpenShift today, called OpenShift Serverless. It's also available on IBM Cloud as Code Engine. So you can try these out today, and we continue to push this evolution of serverless; it's getting us closer and closer to that vision of the cloud as a computer. Yeah, the cloud as an infinitely powerful computer, working as if it were a single computer. I love the vision, and how you and the team, the extended Red Hat team, and the whole open community are making this a reality. So thank you, Priya, for everything that you do and for the fantastic work.
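The serverless attributes just described, one simple call, elasticity that follows demand, and paying only for what you use, can be caricatured in a few lines of code. This is a toy illustration, not Knative or Code Engine: threads stand in for containers, and the "platform" decides how many to spin up.

```python
from concurrent.futures import ThreadPoolExecutor

def serverless_map(func, items, max_workers=8):
    """Run func over items with on-demand parallelism, then 'scale to zero'."""
    if not items:
        return []  # no demand, no workers, no cost
    # Elasticity: the worker count follows the size of the workload.
    workers = min(len(items), max_workers)
    with ThreadPoolExecutor(max_workers=workers) as pool:  # spin workers up
        results = list(pool.map(func, items))
    # Leaving the with-block tears every worker down again:
    # you only "paid" for the duration of the work.
    return results

# One high-level command does the whole "data prep" fan-out:
cleaned = serverless_map(str.strip, ["  a ", "b", "  c"])
print(cleaned)
# → ['a', 'b', 'c']
```

The caller never chose a container count, provisioned a machine, or cleaned anything up, which is exactly the division of labor serverless platforms aim for at cloud scale.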
And I look forward to talking to you soon. Thanks, Dario. See you. We are well down the road of executing our vision of making the world's hybrid cloud resources as easy to use as a single computer. When we do, we will finally realize the full revolutionary potential of the cloud: the ability to get what we need, when we need it, down to the millisecond, with the click of a button. Today's Red Hat open hybrid cloud platform, built on Linux and Kubernetes, provides the essential interoperability layer to seamlessly blend the computational powers of high performance computing, AI, and quantum across cloud providers, and across public and private environments, all the way to the edge. It is a computing architecture that will enable us to discover faster, solve more complex problems, and push not only science but business to new frontiers. I hope that what you've just seen makes you reflect on what it means to be a tech company. The kind of tech IBM does, the types of problems we're attracted to: those define who we are as an organization. We took a journey that spanned nanotech and semiconductors; the world's most secure and highest-performing transactional systems, now also infused with AI; and quantum computers with implications so profound that they are creating a new branch of computing itself. We explored how it all comes together through a visionary serverless open architecture, built on Linux and containers, designed to ultimately make the massively heterogeneous and distributed computers that make up the hybrid cloud work as if they were a single, infinitely powerful computer. Each of these technologies has vastly different methodologies and complexities, but what ties them all together is that they collectively represent the future of computing. I want to close my time with you by sharing a reflection on why I focus so squarely on computing, on why computing with a capital C matters. It comes from another institution I love, MIT, one that has also deeply shaped me.
Just a couple of years ago, the institute came to the conclusion that a new college needed to be created, one that would be its most significant change in the last 50 years. They thought long and hard about what to name it, ultimately settling on the College of Computing: not the College of AI, or of Cloud, or of Quantum, but of Computing. The core vision is both simple and profound: that all future professions, be it that of an architect, an economist, an engineer, or a scientist, will be defined by each of their disciplines plus computing. This has been the IBM story for over a century. It is the story of computing and its impact on business and society. And we have as strong a pipeline of innovation as we have ever had, now multiplied by the power of open innovation. Our vision of combining the capabilities of high performance computing, AI, and quantum in the hybrid cloud is a game changer for how we can use computing to accelerate the discovery of new solutions. Problems long believed to be out of reach are finally being dusted off and looked at anew. I believe that this will bring about a new era of accelerated discovery that will allow us to scale the scientific method across business and society, with discovery-driven endeavors and businesses making extraordinary breakthroughs and innovations. My colleague Michelle Browdy, IBM's brilliant general counsel, summarizes the character of IBM as a company that brings innovation with trust and empathy to our clients. It is a model of working that brings me great joy as an IBMer and as director of IBM Research. I hope you enjoyed taking a look inside our labs. It's been my pleasure to give you a glimpse of the amazing work being done here at IBM. Gracias.