I'm Mukesh Khare, Vice President of Hybrid Cloud Research at IBM. We have reached a turning point in computation, and we must invent ways to accelerate discovery. As we grapple with major challenges like a global pandemic and a warming climate, the next generation of computing infrastructure will define how we respond to these and future crises. A new generation of advanced computing, accessible in the hybrid cloud, will empower collaborative communities of discovery, bringing AI and the virtually limitless computing resources of the hybrid cloud to bear on all facets of scientific discovery. With the help of AI and the computers that power it, our communities will accelerate and scale scientific discoveries at a pace we have never seen before.

To get there, we must overcome longstanding computing bottlenecks in scientific discovery and scale the process itself. This includes scaling human expertise through automation, expanding hypotheses beyond a limited creative design space, and bridging digital methods and simulations with physical experimentation. It also means analyzing and interpreting huge volumes of experimental data and communicating results in a way that is consumable, continuously building on existing knowledge.

The last two decades have seen the emergence of big data-driven science. This new discovery workload is dominated by a flood of observational data. So what does it all mean? It means we must invent new computing paradigms to accelerate scientific discovery. Let's look at how we can solve these problems using AI hardware.

Traditionally, the answer to running big data workloads like these was brute force: we would simply throw ever more processing power at them in the form of high-performance computing and exascale systems. Now we are focused on building a flexible hybrid cloud platform that allows companies to run specific workloads on the hardware best suited for them. This means tapping quantum computers through the cloud to discover new materials, or seamlessly accessing purpose-built AI hardware for natural language processing. This new, scalable hybrid cloud platform will unify local environments into a virtually limitless pool of computing power and capabilities. AI workloads in particular bring new requirements for hardware acceleration that are leading to a more diverse computing environment, one that is disaggregated and heterogeneous.

We are working to evolve AI from a technology that can perform narrowly defined tasks using large amounts of labeled training data to one that can more broadly learn and reason, apply cause-and-effect understanding to data, and integrate information from multiple formats and multiple domains. We are building AI with fluid intelligence. But such state-of-the-art workloads require breakthroughs in processing power, memory, and bandwidth to fully mature.

The IBM Research AI Hardware Center plays a central role in accelerating the development of hardware purpose-built for the needs of AI. Launched in early 2019, the AI Center is the nucleus of an ecosystem of research and commercial partners collaborating with IBM to accelerate AI-optimized hardware advances. AI today is stretching the limits of the current generation of computing hardware, and AI systems are among the biggest guzzlers of computing energy. Since 2012, the computation requirements for large AI training jobs have grown more than 300,000 times. To put that in perspective, training a single large-scale AI model can emit as much carbon as five cars produce over their lifetimes.
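That "more than 300,000 times" figure is worth unpacking. As a rough back-of-the-envelope check (my own arithmetic, not a number from the talk), the sketch below works out the doubling rate such growth implies; the six-year window from 2012 to roughly 2018 is an assumption about the period behind the figure.

```python
import math

# Back-of-the-envelope check on the quoted growth in AI training compute.
# Assumption (not stated in the talk): the ">300,000x since 2012" figure
# covers roughly a six-year window, 2012 to about 2018.
growth = 300_000                 # total growth factor in training compute
years = 6                        # assumed span of the measurement

doublings = math.log2(growth)    # ~18.2 doublings
months_per_doubling = years * 12 / doublings

print(f"{doublings:.1f} doublings over {years} years")
print(f"compute doubling roughly every {months_per_doubling:.1f} months")
# Roughly one doubling every ~4 months -- far steeper than the ~24-month
# doubling cadence usually associated with Moore's law.
```

If the window were longer the implied doubling time stretches accordingly, but under any reasonable assumption the curve is far steeper than what general-purpose hardware improvement alone can absorb.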
This colossal energy consumption is contributing to the climate crisis rather than helping solve it. The AI Center's goal is to improve compute performance efficiency 1,000 times by 2029, with the annual delivery of new AI accelerator cores targeting a 2.5x improvement per year. We more than doubled that gain in the first year of the AI Hardware Center. The center now has 14 members accessing energy-efficient technologies to power AI applications and software advances. The AI Hardware Center is working on several projects in tandem that will help us reach our 1,000x goal and enable more sustainable computing.

We take a holistic, end-to-end approach to AI hardware, spanning the entire computing stack from fundamental materials research to chip design, algorithms, and AI workloads. For example, we are a global leader in approximate computing architectures and software techniques for deep learning with our digital AI cores. One of our newer members, Red Hat, is teaming with us to build a software stack for our digital AI cores, making them compatible with the OpenShift ecosystem. These accelerators are built with capabilities that enable them to be deployed across hybrid cloud infrastructure: multi-cloud, private cloud, on-premises, and edge. In a related development, the AI Center is launching the third generation of our digital AI core, fully integrated with the Red Hat OpenShift platform.

We also produce analog AI cores, paving a path to ideal analog core behavior by combining materials, devices, and algorithmic innovations. Here is a short video of an analog chip manufactured partly by our foundry partner, Samsung, and partly in our Albany facility through our engagement with New York State.

At IBM Research, we're developing a new class of AI hardware, purpose-built to exploit the potential of AI yet requiring drastically less power to accomplish its tasks. Analog AI delivers radical performance improvements by combining compute and memory in a single location, eliminating the need to shuttle data between storage and logic. Removing that overhead means an explosion in performance, increasing the speed and energy efficiency needed for the next steps in AI. Neural network models are behind most of today's AI gains. Each AI model is described by a unique neural network structure that can be mapped to the chip, layer by layer and tile by tile. Hundreds of thousands of analog AI devices sit inside each tile of the chip, and every analog AI device stores a synaptic weight that the AI model uses to perform operations from tile to tile. This new chip is a marvel of materials innovation. It uses phase-change memory to encode the weights of a neural network: when an electrical pulse is applied to the material, it changes the resistance of the device by modulating the phase-change memory between its amorphous and crystalline states. This in-memory computing hardware delivers the speed needed for the next generation of AI, opening up possibilities for far-reaching uses of the technology, so projects with massive AI components, like natural language processing, reasoning, and autonomous driving, can be explored in new, ambitious ways. Innovators have never seen AI acceleration like this before. IBM Research, inventing what's next.

Beautiful, isn't it? That's our latest analog AI core. Our AI Center innovations also include heterogeneous integration chip packaging to eliminate the memory bandwidth bottleneck.
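The tile-and-weight description above maps naturally onto a crossbar-style matrix-vector multiply: each stored device state acts as a synaptic weight, and the multiply-accumulate happens where the weights live instead of shuttling them to a separate processor. The NumPy sketch below simulates that idea at a toy scale; the tile size, noise levels, and function names are illustrative assumptions of mine, not parameters or APIs of IBM's actual analog cores.

```python
import numpy as np

rng = np.random.default_rng(0)

def program_tile(weights, write_noise_std=0.02):
    """Simulate programming a layer's weights into an analog tile.

    Each weight is stored as a device state (e.g., a conductance).
    Writing is imperfect, so Gaussian programming noise is added.
    """
    return weights + rng.normal(0.0, write_noise_std, size=weights.shape)

def analog_matvec(tile, x, read_noise_std=0.01):
    """In-memory multiply-accumulate.

    The weights never leave the tile; only the input activations x
    and the accumulated outputs cross the tile boundary.
    """
    y = tile @ x
    return y + rng.normal(0.0, read_noise_std, size=y.shape)

# A toy 4x8 "tile" holding one layer's weights.
W = 0.5 * rng.standard_normal((4, 8))
tile = program_tile(W)

x = rng.standard_normal(8)
print("ideal digital result :", np.round(W @ x, 3))
print("noisy analog result  :", np.round(analog_matvec(tile, x), 3))
```

The point of the sketch is the data-movement argument made above: the expensive operand, the weight matrix, stays put, which is where the claimed speed and energy gains come from, while the algorithmic challenge becomes tolerating the write and read noise that the simulation crudely models.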
On the packaging front, we recently launched a new advanced packaging center in Albany with six members focused on this work. We are also creating a software ecosystem for AI hardware tools, as well as an AI test bed environment and ecosystem for exploring AI tools and accessing workloads. In addition, we are working to bring AI research tools, such as AI for Business and industry automation, to two TOP500 supercomputers, AMOS and AiMOS, located on the campus of AI Hardware Center member RPI.

Our effort to develop more efficient and powerful AI hardware holds immediate promise for the creation of transformational technology. Our vision is to build a seamless hybrid cloud platform that uses the power of AI to extend scientific discovery to a broader set of domains, moving beyond the natural sciences and into the everyday lives of billions of people. This will provide an essential basis of knowledge for informed decision-making at many discovery-driven enterprises. Critical decisions must be informed by science, not intuition. Our advancements in AI hardware are helping to enable models and methods that will forever change the way we solve problems. This is What's Next. Thank you.