Thank you and good morning and welcome again to AI Research Week. We hope it'll be a very exciting day. This is where I work. This is the global headquarters of IBM Research, about an hour north of New York City. In this building alone, we have about 1,500 scientists, and it's part of a global network of laboratories across IBM Research, where we have 19 locations and 3,000 scientists. Now, next year, we'll mark our 75th anniversary of running a research division. So we're really, really passionate about our commitment and our investments in science, and we're just delighted to have all of you as collaborators and as part of the community devoted to advancing science.

I'd like to take 30 seconds to give you a little bit of context on IBM and why we make these commitments to science. IBM is an enterprise company with a community of 350,000 IBMers operating in over 170 countries. What we're really passionate about is the idea of progress, of living at the intersection of advancements in science and technology and people, and how we bring those together to make institutions work better. We operate across a whole variety of areas, from AI and security and cloud and services, to make, essentially, the world run behind the scenes. That's what really motivates us every day. And we do it, and I think this is more important than ever given the advances in technology, with a philosophy of trust: to be good stewards as we make advances in technology and bring them responsibly into the world.

Now, behind the scenes, IBM is the only company that has been doing information technology and computing for over a century. I like to share that fact because the other side of that equation is that the normal course of events, if you're doing information technology and computing in business, integrated over time, is to disappear. So why haven't we? I really believe that at the core of why we've been able to thrive for so long is our commitment to research. And IBM Research is an institution. We've always maintained this blend and this balance between really fundamental investments in core science, of the type that, as one example, has seen six IBMers win a Nobel Prize, and bringing an impact into the world. That's the companion to our investments: extraordinary scientists and engineers working together to take these advancements and, in the end, bring them purpose and bring them scale. From some of the very first programming languages, to hard disk drives, to memory like DRAM, to seminal advancements in AI and in quantum computing, and too many semiconductor advancements to mention here, we've had a long and proud history of this commitment and of driving impact.

What we do is bring together a passionate community of scientists and engineers from a broad variety of disciplines, physicists and mathematicians and engineers of every discipline, but with a lens. And the lens with which we look at the world is the lens of information and computation. That's what binds us together in the community of IBM Research. What I want to do in my short remarks today is share with you a vision of where the future of computing is going. I firmly believe this is the most exciting time in computing since the 1940s or '50s. And I want to share with you a perspective on three foundational technologies that are going to come together within this decade and some of the implications that they will have.
But this intersection of bits plus neurons plus qubits coming together is going to fundamentally alter and transform how we understand computing and what it's going to be possible to do. If we were to summarize briefly the last 60 years of computing, I think it would be fair to say that it was based on a few core ideas. Of course, classical information theory, which Claude Shannon and other key pioneers gave us, and which we're all very familiar with. We had the foundational idea of the binary digit, the bit, and this understanding that we can look at information abstractly. As Shannon advocated, this separation, almost a platonic idea of zeros and ones decoupled from their physical manifestation, was an interesting insight. And it's actually what allowed us for the first time in history to look at the world and look at images as different as these, a punch card and DNA, and come to appreciate that they have something in common: they're both carriers and expressors of information.

Now, there was a companion idea that was not theoretical in nature but practical, and it guided the industry, and that was Moore's law. This is a recreation of the original plot from Gordon Moore, when he had four data points in the 1960s, an observation that the number of transistors you could fit per unit area was doubling every 18 months. And he extrapolated that. Amazingly enough, that has happened over 60 years, not because it fell off a tree, but thanks to the work of scientists and engineers working incredibly hard. I always like to cite, just to give an example of the level of global coordination in R&D that is required, that $300 billion a year is what the world spends to move from node to node. Just imagine the sheer amount of talent and effort that has to come together to place one of those dots as you move down the line. But that's what happened. And the result of that is we've digitized the world: essentially, bits have become free and the technology is extraordinarily mature. Just to give you an example of the machines we now build, last week we announced our latest generation of Z systems. You can build machines with five nines of reliability. You can build a single machine that can process a trillion transactions a day. That is just phenomenal in terms of the possibilities. And a by-product of all of this is that there's a community of over 25 million software developers around the world who now have access to digital technology, creating and innovating, and as we've all witnessed, that is why software has become sort of the fabric that binds business and institutions together. So, a very, very mature technology.

We are of course pushing the limits. Now we are reaching the limits too. Moore's law had two components: there was density scaling and there was device scaling. The devices got better as we shrunk them. Well, that stopped working around 2003, so we just have density scaling left. And even here, we're going down to seven nanometers, five nanometers, three nanometers, but we're reaching the limits. And there is a limit: in the end, there are atoms. Now, in IBM Research we're also very well known for expertise in nanotechnology; the invention of the STM and the AFM was done at IBM Research Zurich. And this is kind of the limit, right? It turns out you need 12 magnetic atoms to store a piece of information.
So you're seeing here things spelled out, ASCII characters, with 480 atoms that have been arranged. So in the end, there is a limit set by the physical properties, and we also need to explore alternative ways to represent information in richer and more complex ways. This is the world of bits. Now the world of bits, in some way, we can also look at as the intersection between the field of mathematics and this lens of information.

We also know there's been another idea that has been running for well over a century now, which is the intersection of the world of biology and information. Santiago Ramón y Cajal, at the turn of the 1900s, was among the first to understand that we have these cell structures in our brain called neurons, that neurons have axons, that the junctions of axons form synapses, and the linkage between this neural structure and memory and learning. And as we all know, it wasn't with a whole lot more than this biological inspiration that, starting in the 1940s and '50s and of course continuing to today, we saw the emergence of artificial neural networks that took that loose inspiration from the brain. We have witnessed the rapid evolution, rapid in quotes, it took many decades, but we have seen the evolution of this, and of course the deep learning revolution, whose implications have become broadly understood since 2012. I think we're delighted also that Joshua is here with us, of course one of the foremost scientists in this field, and we look forward to his remarks in a minute. But we've witnessed what has happened over the last six years in terms of this intersection between the bit revolution, with the consequence of digitizing the world and having more and more of that digital data available to train on, and the associated computing revolution, so that we have big enough computers to train some of these deep neural networks at scale. We've seen the progress in the field and we've seen the consequences of that on the passion and enthusiasm of the community. This is just an illustration of levels of enrollment at Stanford and MIT, but it's true in all the leading universities, in terms of students who are passionate about looking at the world through this lens of machine learning and information processing. From what used to be maybe a hundred students looking into this, we now have thousands of students enrolled in these kinds of courses.

Having said all of that, we have now been able to demonstrate that fields that have been with us in AI and computer science for a long time, like speech recognition and language processing and so on, have been deeply impacted by this approach, and we've seen the accuracy of these systems really improve. But we're still in this narrow AI domain. So one piece of advice we've been trying to advocate, because the term AI itself is a mixed blessing, right? It's a fascinating scientific and technological endeavor, but it's a scary term for society. And when we use the words AI, we often are speaking past each other; we mean very different things when we say those words. So one useful thing is to add an adjective in front of it. Where we really are today is that a narrow form of AI has begun to work. That's a far cry from a general form of AI being present. And when we see dates here, we don't know when that's gonna happen.
You know, my joke on this is that when scientists put down numbers like 2050, what we really mean is we have no idea, right? So the journey, of course, is to take advantage of the capability that we have today and to push the frontier and the boundary towards broader forms of AI. Simply put, we are passionate advocates, within IBM and in the collaborations we have, of bringing together the strengths and the great traditions within the field of AI into neuro-symbolic systems: as profound and as important as the advancements we have seen in deep learning are, we have to of course combine them with knowledge representation and forms of reasoning, and bring those together. The journey down that frontier is so that we can build systems that are capable of performing more tasks across more domains. And very importantly, and Sascha will get a chance to talk a little bit about this this morning, so that we build trusted AI systems, which refers back to my comment at the beginning that as technology gets more powerful, the dimension of trust becomes more and more essential if we're to fulfill the potential of these advancements and get society to adopt them.

So in this journey of neuro-symbolic AI, and I apologize, I'm a little bit under the weather, I think it's gonna have implications at all layers of the stack. One interesting trend that we have seen, a consequence of what I was saying about Moore's law, the fact that devices did not get better after 2003 as we scaled them, is the set of architectural innovations that the community responded with. One was the idea of multi-core, adding more cores to a chip. But there was also the idea of accelerators of different forms; we knew that a form of specialization in computing architecture was going to be required to be able to adapt and continue the evolution of computing. This has been illustrated over the last six, seven years. Just to give you a flavor: last year, we delivered the number one and number two supercomputers in the world, Summit and Sierra, to the Department of Energy. They're capable of performing 200,000 trillion calculations per second. Every once in a while, it's actually useful to step back and look at the numbers and reflect on what we have achieved as a community, because it's kind of mind-blowing that it's possible to build these kinds of systems with that reliability. But what we see architecturally here is that you're bringing together a blend of a large number of accelerators and a large number of CPUs, and you gotta create system architectures with high-bandwidth interconnect because you gotta keep the system utilization really, really high. So this is important, and it's illustrative of what this future of combining sort of bit-based and neuron-based architectures is gonna look like. And just to give an example of this, these are some recent results from Song Han and Chuang Gan from IBM and MIT, where you're seeing results from a state-of-the-art video activity recognition model. With one Summit node, which has six GPUs, it took essentially two days to train, and you can get that down to about 14 minutes using 256 nodes, a demonstration of the kind of scaling you can achieve with these systems.
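As a quick, purely illustrative check on those scaling numbers (the "two days" and "14 minutes" figures are the rounded values quoted above, not exact measurements), a few lines of Python are enough to compare the measured speedup against ideal linear scaling:

```python
# Rough, purely illustrative check on the scaling numbers quoted above.
# The figures are rounded as stated in the talk, so this only gives a
# ballpark sense of how close the system gets to linear scaling.

baseline_minutes = 2 * 24 * 60   # ~2 days on a single Summit node
scaled_minutes = 14              # ~14 minutes on 256 nodes
nodes = 256

speedup = baseline_minutes / scaled_minutes   # ~206x measured
ideal_speedup = nodes                         # 256x under perfect linear scaling

print(f"Measured speedup: ~{speedup:.0f}x vs. an ideal {ideal_speedup}x")
print(f"Scaling efficiency (ballpark): {speedup / ideal_speedup:.0%}")
```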
So if you actually look at the theoretical scaling performance of this system versus what was actually realized, they are very, very close. So it's an example of what it takes to design systems that bring these architectures together with high levels of utilization and performance. It's very interesting to be able to design systems for this kind of peak level of performance. And we're actually delighted that, within the context of the MIT-IBM lab, we've donated a slice of it, kind of a mini Summit. This is the boot-up sequence John Cohn was sharing with me a couple of days ago; it is just booting up right now. Its name is Satori, and it's gonna be a four-petaflop system. So that's very, very exciting. I think these are the kinds of tools we're gonna need.

Now, having said that, the hardware that we have today is not gonna be enough for the needs of the AI community. We're very committed to continuing to evolve architectures and device innovation, all the way down to materials, for the decade ahead and beyond. That's why we launched the AI Hardware Center in Albany this year, and we've committed $2 billion to create this roadmap and deliver on it beyond today's GPUs. So if you look at it, what is the core of the issue? If you look at some very state-of-the-art models, you can see a plot, in terms of petaflop/s-days, of some examples of recent research work that has been happening in the community, including AlphaZero and AlphaGo Zero, as a function of time. One of the things that we witness here is that the compute requirement for training jobs is doubling every three and a half months. We were very impressed with Moore's law doubling every 18 months, and this thing is doubling every three and a half months. That is a real problem. Obviously that's unsustainable; if we keep at that rate for sustained periods of time, we'll consume every piece of energy the world has just to do this. So that's not the right answer. There's a dimension of this that has to do with hardware innovation, and there's a dimension that has to do with algorithmic innovation. This is a roadmap we have laid out for the next eight years or so of how we're gonna go from digital AI cores like we have today, based on reduced-precision architectures, to mixed analog-digital cores, to, in the future, perhaps entirely analog cores that implement the multiply-accumulate function inherently and very efficiently in the devices as we perform training. Even in this scenario, which is still gonna require billions of dollars of investment and a lot of talent, the best we can forecast is about a two-and-a-half-times improvement per year. That's well short of doubling compute every three and a half months (the gap between those two growth rates is sketched below). So the other side of the equation, we have to deliver this for sure, but the other side of the equation is the work that you all do. That is the aspect of dramatically improving the algorithmic efficiency of AI on the problems we solve. And that is why we're such passionate supporters of collaborating with academia and universities to work on all of this together, both through the AI Horizons Network and our commitment to the MIT-IBM Watson AI Lab.
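To make that gap concrete, here is a minimal back-of-the-envelope comparison of the two growth rates quoted above (demand doubling every 3.5 months versus a forecast of roughly 2.5x per year from the hardware roadmap); the figures are taken as stated in the talk and the calculation is only illustrative:

```python
# Minimal sketch comparing the two growth rates discussed above, using
# the figures as quoted in the talk: training-compute demand doubling
# every 3.5 months vs. a forecast of ~2.5x per year from hardware.

demand_growth_per_year = 2 ** (12 / 3.5)   # ~10.8x per year
hardware_growth_per_year = 2.5             # forecast hardware improvement per year

remaining_gap = demand_growth_per_year / hardware_growth_per_year
print(f"Compute demand: ~{demand_growth_per_year:.1f}x per year")
print(f"Hardware roadmap: ~{hardware_growth_per_year}x per year")
print(f"Gap left for algorithmic efficiency: ~{remaining_gap:.1f}x per year")
```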
And just to give you a flavor of our support and our commitment to this, you see within the AI Horizons Network the extraordinary partners and academic institutions that we're delighted to collaborate with: over 60 faculty involved in this network, 140 students, 80 IBM researchers, focused on 10 different strategic areas within AI. So, a broad view and a broad level of engagement in AI. In parallel, within the MIT-IBM Watson AI Lab, we have 70 projects running at present since we launched the laboratory a little over two years ago, with 23 different departments engaged and participating in this activity. And a testament to the wonderful collaboration that we all have here is our commitment to scientific publication: over the last few years, over 100 scientific publications in top journals and conferences. I'm really confident that it is through these mechanisms of collaboration that we will make the progress that is required in the field of AI to move towards this broad AI era. And today you're gonna get a chance to see some of the progress, and, through the dialogue we'll have with leading researchers, to explore how we can learn more from less data and combine learning with transfer learning and knowledge representation and reasoning. We'll have panels towards the end about this frontier of the physics of AI and pushing new capabilities for infrastructure. But of course, fundamentally important: how do we build a trust layer into the whole AI process, around explainability and fairness and the security of AI and the ethics of AI, and the entire engineering life cycle of models, and what it takes to bring AI engineering into practice to realize the impact of the technology.

So I'd like to take the last few minutes to share with you the third pillar of this future of computing. In the same way that I was alluding to that the intersection of mathematics and information was the world of classical bits, and biology and information gave us the inspiration for neurons, it is physics and information coming together that is giving us the world of qubits. And for us at IBM, this journey is something we've been pursuing for many decades. It began with Rolf Landauer, right? A great physicist, an IBM Fellow, who at the time hired Charlie Bennett. This is a recent picture of Charlie, not from when he was hired, who is one of the fathers of quantum information theory. They were physicists asking questions about the world of information, and it was very, very interesting. They would ask questions like this: is there a fundamental limit to the energy efficiency of computation? Or: is information processing thermodynamically reversible? The kinds of questions only physicists would ask, right, in looking at that world. And pulling on that thread, and on the assumption Shannon had given us of separating information and physics, don't worry about that coupling, they actually poked at the very question of whether that was true or not. In fact, this is a photograph of a notebook from Charlie Bennett that he shared with me, of the first time we believe these words were written down: quantum information theory. And look at the date there. It's 1970, right? That's how long we've been looking into this. And there are interesting stories behind the scenes about why it says "false" and so on, but that's for a different time.
Turns out he wasn't right about that. Anyway. So what did we learn as that thread was pulled, in a very simplified fashion? We learned that the foundational block of information is actually not the bit, but something called the qubit, short for quantum bit, and that we could express some fundamental principles of physics in this representation of information. Specifically, for the world of quantum computing, three ideas, the principle of superposition, the principle of entanglement, and the idea of interference, have to come together in how we represent and process information with qubits, which I won't take time today to describe in detail. But the reason why this matters is because we know there are many classes of problems in the world of computing and the world of information that are actually very, very hard for classical computers to solve. In the end, we're bound to things that don't blow up exponentially in the number of variables. Now, a very famous example of a thing that blows up exponentially in the number of variables is simulating nature itself. And that was the original idea of Richard Feynman when he advocated that we needed to build a quantum computer, a machine that behaved like nature, to be able to model nature. But that's not the only problem in the realm of mathematics; we know other problems also have that character. Factoring is an example, right? The traveling salesman problem, optimization problems. There's a whole host of problems that are intractable with classical computers, where the best we can do is approximate them. Now, quantum is not gonna solve all of that. There's a subset of them that it will be relevant for, but it's the only technology we know of that alters that equation, turning something intractable into something tractable.

And what is interesting is that we find ourselves in a moment like 1944. This is a picture at Bletchley Park in 1944 of what is arguably the first digital programmable computer. In a similar fashion, we've now built the first programmable quantum computers. And this is a recent advent; it just happened in the last few years. So in fact, in the last few years we've gone from that kind of laboratory environment to building the first engineered systems that are designed for reproducible and stable operation. This is a picture of the IBM Q System One that sits in Yorktown. That's an exemplar of the progress we're making in this field. So what we've done, and what I really love about what is happening right now, this is an example of what happens. You can sit in front of any laptop anywhere in the world, write a program, and it takes those zeros and ones coming from your computer; in our case, we use superconducting technology, so they're converted into microwave pulses at about five gigahertz that travel down the cryostat through superconducting coaxial cables. This operates at 15 millikelvin, and then we're able to perform the superposition and entanglement and interference operations in a controlled fashion on the qubits, read out the microwave signal, convert it back to zeros and ones, and present an answer back (a minimal sketch of what such a program looks like follows below). So it's a fantastic scientific and engineering tour de force, but what's really, really exciting is that we're making it available to the world. So in 2016, we were the first company in the world to put one of these quantum systems on the cloud, and it's been an amazing reaction since. I wanna show you a little video of the community that has been created since then.
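As a concrete illustration of what such a program looks like from the laptop side, here is a minimal sketch, not from the talk, using the open-source Qiskit SDK against a local simulator; it prepares a two-qubit Bell state (superposition plus entanglement) and reads the result back as zeros and ones. The package names and use of a local simulator are assumptions for this sketch; on the cloud-hosted hardware the same circuit would be compiled down to the microwave pulses described above.

```python
# Minimal sketch (illustrative only): a two-qubit Bell-state circuit run
# on a local simulator, assuming the qiskit and qiskit-aer packages are
# installed. On real hardware, the same circuit is compiled into pulses.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits back out as classical bits

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
print(counts)                # expect roughly half '00' and half '11'
```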
Let's see if this can play. So, since we put the first system online, these are the users all over the world. You'll see that we now have over 150,000 users who are learning how to program these quantum computers, and there have been over 200 scientific publications generated with these environments. So it's the beginning of, I'm not gonna say a new field, the field of quantum computing has been with us for a while, but it's the beginning of a totally new community, a new paradigm of computation that is coming together. One of the things that you're gonna see, too, is that we gave access to both a simulator and the actual hardware, and now it has crossed over, right? What people really want access to now is the real hardware, to be able to solve these problems. There's an amazing community now of companies and universities all over the world, right, who are creating curricula and working in this space; since we launched the Q Network a year and a half ago, over 78 members have become part of it. So something enormously exciting is happening.

So where are we going next? Well, where we're going next is that we have to go through this journey of getting the world quantum ready and getting to the era of quantum advantage, which, as we go towards that goal in the 2020s, is when we will be able to utilize quantum computing for scientific and commercial advantage. There's a roadmap for it. You remember the Moore's law I told you about? We're committed to at least doubling quantum volume every year. For three years in a row now, we've been doubling the quantum volume of these systems, from four to eight to 16. So if we keep this pace or better, we will achieve pretty spectacular results with quantum computing.

So let me bring this to a close and make the argument that we're finally beginning to see an answer to what happens at the end of Moore's law. It's a question that has been in front of the industry for a long, long time. And the answer is that we're gonna have this new foundation of bits plus neurons plus qubits coming together over the next decade. They're at different maturity levels: bits enormously mature; the world of neural networks and neural technology, of course, next in maturity; quantum the least mature of the three. But it is important to anticipate what will happen when those three things intersect within a decade. And I think the implications it will have for intelligent mission-critical applications for the world of business and institutions, and the possibilities to accelerate discovery, are so profound. And I'll close on that thought. Imagine the discovery of new materials, which is gonna be so important to the future of this world. In the context of global warming and so many of the challenges we face, the ability to engineer materials is gonna be at the core of that battle. And look at the three scientific communities that are interested in the intersection of computation and that task. Historically, we've been very experimentally driven in the discovery of materials. But here, you'll have the classical guys, the HPC community, which has been on that journey for a long time and says: we know the equations of physics, and we can simulate things with larger and larger systems. And we're quite good at it; there have been amazing accomplishments in that community. But now you have the AI community, which says: hey, excuse me, I'm going to approach it with a totally different methodology, a data-driven approach to that problem.
I'm gonna be able to revolutionize and make an impact on discovery. And then you have the quantum community, who says: that's the very reason why we're creating quantum computers. That was Feynman's idea. We will build these machines to be able to do that. All three are right. And imagine what will happen when all three are combined. And that is what is ahead for us for the next decade. Thank you.