Today I have the pleasure of sharing with you our perspective on what's next in AI, and how that perspective informs and shapes our research agenda. A discussion of what's next must, of course, begin with an understanding of the state of AI today. While this is a simplification, it's reasonable to say that today's AI is capable of superhuman feats of pattern recognition, leveraging enormous compute and data sets. We have seen these capabilities improve dramatically as more data and more compute power have become available. Yet today's AI lacks something: it is narrow and inflexible.

What's next in AI is about achieving fluid intelligence. For us, fluid intelligence has three key characteristics: it is adaptable, it is robust, and it is able to learn on the fly. Adaptable, because the intelligence can take experience and knowledge from one task and apply it in a different domain. Robust, because it can cope with the fact that, when the intelligence is deployed, the situation may not be precisely the way it was trained. And able to learn on the job, because when information is missing it can acquire knowledge, or even experiment on the fly, to adapt to new situations.

The critical element that lends fluidity to human intelligence is our ability to abstract and reason. That ability comes from the fact that we can model the world with symbols. We then invented languages to connect those symbols, creating relationships and higher-order concepts, and our symbols and languages have dramatically expanded what we are able to do. Over the course of history, we've invented languages not just to communicate (English, French, Spanish, and so on) but also languages for chemistry, for mathematics, for physics, for biology, and even the languages we use to program digital computers. Using these symbols and languages, we have done a few important things. We have been able to acquire, represent, share, and communicate knowledge. We are able to reason over that knowledge to generate new facts, and those new facts have guided our actions. (A tiny sketch of this kind of rule-based derivation follows at the end of this introduction.)

There's a strong tradition of building AI based on knowledge representation and reasoning; in fact, that was the original tradition in AI. But it's clear to us that the journey to fluid intelligence will require us to merge the two traditions, bringing together neural techniques and symbolic techniques, the best of both. At IBM Research, we have a deep investment in advancing neuro-symbolic AI on this journey to fluid AI.

But a discussion of what's next in AI would be incomplete without discussing how we can help businesses and enterprises rapidly adopt AI at scale, whether it is today's AI or the more fluid AI of tomorrow. At IBM Research, we have a deep investment and a strong innovation agenda in helping enterprises rapidly scale the adoption of AI, and the focus of that agenda is to remove the friction points that slow down enterprise adoption. Our agenda in AI engineering has two main pieces. The first addresses the scaling of AI in the enterprise. The second addresses compute scaling, recognizing that AI models are making ever larger demands on compute infrastructure, and that this scaling challenge must be addressed.
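Before turning to enterprise scaling, here is the promised sketch: a minimal, purely illustrative example of symbolic forward chaining, where rules are applied to a set of facts until no new facts can be derived. This is a textbook toy, not a description of IBM's neuro-symbolic systems, and every name in it is hypothetical.

```python
# A minimal sketch of symbolic forward chaining: apply rules to a set of
# facts until a fixpoint, deriving new facts from existing ones.
# Variables are strings starting with "?".

def unify(pattern, fact, bindings):
    """Try to match one pattern against one fact, extending bindings."""
    if len(pattern) != len(fact):
        return None
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):          # variable: bind or check consistency
            if p in b and b[p] != f:
                return None
            b[p] = f
        elif p != f:                   # constant: must match exactly
            return None
    return b

def substitute(pattern, bindings):
    """Replace variables in a pattern with their bound values."""
    return tuple(bindings.get(t, t) for t in pattern)

def forward_chain(facts, rules):
    """Apply (premises -> conclusion) rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Find every consistent binding that satisfies all premises.
            candidates = [dict()]
            for premise in premises:
                candidates = [b2 for b in candidates for fact in facts
                              if (b2 := unify(premise, fact, b)) is not None]
            for b in candidates:
                new_fact = substitute(conclusion, b)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

# Example: derive grandparent relationships from parent facts.
facts = {("parent", "ann", "bob"), ("parent", "bob", "carol")}
rules = [([("parent", "?x", "?y"), ("parent", "?y", "?z")],
          ("grandparent", "?x", "?z"))]
print(forward_chain(facts, rules))
```

Running this derives ("grandparent", "ann", "carol") from the two parent facts. Real knowledge bases and reasoners are vastly richer, but the pattern of deriving new facts from existing knowledge is the same.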
Let me begin with the discussion on enterprise scaling. At IBM, we have the luxury of working with thousands of clients across the globe, in every possible industry, and helping them on their journey to AI. We are clearly seeing a shift from isolated proofs of concept and single AI deployments to strategic programs to infuse AI into every business process, application, and workflow. Looking across all of those client engagements, consider the AI lifecycle: it begins with data and all of the work that must happen to discover, curate, cleanse, and prepare the data for AI; then comes the actual modeling; and then deploying and integrating the model and managing its lifecycle. The friction points our clients see map to three areas: data, skills, and operations. How do I do my data preparation, discovery, and cleansing faster? How do I scale up model building when I do not have enough skills for all of the AI that I want to build? And how do I drive more speed and rigor into the management of models when I operate them, while taking into account enterprise needs for security, risk, governance, compliance, and so on?

We are bringing the latest innovations in AI to each of these problems. As an example, we are using leading-edge innovations in neural embeddings and graph neural networks to drive automation into data discovery and cleansing. In data science automation, we are going beyond simply automating the modeling (AutoML) to automating and optimizing feature engineering, and to building models that adhere to enterprise business constraints. And when it comes to the lifecycle, we are developing AI-assisted tools to continuously monitor models, predict model performance, alert on scenarios of potential performance degradation, and even automate remediation and improvement of models.

This is a rich agenda and portfolio, but let me illustrate it with a few examples of our work in this space. Let me begin with an example from our work in data automation. By bringing our latest AI techniques to the problem of understanding the relationships among the vast troves of enterprise tabular data sources (many of our clients have thousands of them), we are able to dramatically reduce the time it takes to identify linkages between data stores and to do metadata classification and labeling. We have seen improvements of over 90% in metadata discovery and linking.

Now let's look at an example from data science, following my earlier comment on feature engineering. A very common pattern in enterprises is that data scientists are handed a huge database, with a large number of tables and hundreds of columns, and they spend months figuring out which subset of the data, and which operations and transformations on it, will let them make progress on their modeling. Our work uses AI to automate that feature engineering, and we have seen a two-to-five-times reduction in the time it takes, and an order-of-magnitude reduction in the lines of code a data scientist has to write. Our clients are experiencing a number of these advantages. One of our clients has been able to go from hundreds of features for their models to thousands of features, including tens of new features that our system discovered automatically and that the human data scientists had never used. This expanded capability allowed them to drive a three-times improvement in their business KPIs.
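To give a flavor of what automated feature engineering can look like, here is a hedged toy sketch. It enumerates simple candidate transformations and ranks them by correlation with the target, a crude stand-in for the much richer search and model-based evaluation a real system performs. It does not reflect IBM's actual implementation, and all names in it are hypothetical.

```python
# A toy sketch of automated feature engineering: enumerate candidate column
# transformations, score each by absolute correlation with the target, and
# keep the most promising ones.

import numpy as np

def candidate_features(X, names):
    """Yield (name, column) pairs for simple unary and pairwise transforms."""
    n_cols = X.shape[1]
    for i in range(n_cols):
        yield f"log1p(|{names[i]}|)", np.log1p(np.abs(X[:, i]))
        yield f"square({names[i]})", X[:, i] ** 2
    for i in range(n_cols):
        for j in range(n_cols):
            if i != j:
                yield f"{names[i]}/{names[j]}", X[:, i] / (X[:, j] + 1e-9)

def select_features(X, y, names, top_k=5):
    """Rank candidate features by |correlation with y| and keep the top_k."""
    scored = []
    for name, col in candidate_features(X, names):
        if np.std(col) > 0:  # skip constant columns
            score = abs(np.corrcoef(col, y)[0, 1])
            scored.append((score, name, col))
    scored.sort(key=lambda t: -t[0])
    return scored[:top_k]

# Tiny demo with synthetic data where the useful feature is a ratio.
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(200, 3))
y = X[:, 0] / X[:, 1] + rng.normal(0, 0.05, size=200)
for score, name, _ in select_features(X, y, names=["a", "b", "c"]):
    print(f"{name}: corr={score:.2f}")
```

On this synthetic data the ratio a/b rises to the top of the ranking, which is the point: the system, not the data scientist, surfaces the transformation worth keeping.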
So these are examples where, by bringing AI to the automation of the AI lifecycle, we have been able to drive acceleration and remove some of the friction points in adopting AI at scale. But there is a fourth dimension to automation that is equally important in an enterprise setting, and that is governance. Before I talk about governance, I want to distinguish between trusted AI and governance. Trusted AI is the work we do to develop new AI techniques that are robust, fair, explainable, and transparent; it is the algorithmic side of the innovation. Governance is how we take those innovations and operationalize them, so that we are managing risk and compliance from an enterprise perspective.

In trusted AI, we have the most comprehensive agenda across every aspect: fairness, explainability, robustness, and transparency. Hundreds of scientific publications and leading-edge results, along with the open-source toolkits we have released in this space, are a testament to that rich agenda. On governance, we are pioneering an approach built around the notion of fact sheets to bring automation to governance. The idea is to automatically collect facts about the model: When was it created? What was it tested on? What was its quality? What bias was in the data, and what bias remained in the model after training? These facts are captured automatically throughout the lifecycle, and the model facts can then be presented, using fact sheets, to the different stakeholders involved in governance, whether that is a data scientist, an application developer, a risk and compliance officer, or the business owner.

In summary, stepping back, our approach of AI for AI is a holistic one, looking at all aspects of the lifecycle and governance of AI, and bringing AI-assisted tools and automation to accelerate our clients' journey of driving AI into their enterprises.

We have talked about using automation to help scale AI in the enterprise. Let's now turn to the other scaling challenge, the challenge of scaling compute. As this chart shows, the compute requirements of the biggest AI models today are doubling every three and a half months. Now, these models are being created to beat benchmarks, and the average enterprise use case is not going to use them directly. But this trajectory of compute usage illustrates that we will need to address the compute challenge to drive AI adoption at scale. Addressing it will require us to go beyond running AI workloads on today's hardware of CPUs and GPUs; it will require us to create hardware that is purpose-built for the AI workloads of the future. It is to this end that we have created the IBM AI Hardware Research Center, in partnership with the state of New York. At the center, we are driving a holistic program across every aspect of innovation, all the way from new materials to new architectures, designs, chips, and frameworks, up to the software. We have established a multi-year roadmap that includes innovation in both digital AI cores and analog AI. In digital AI, we are driving an approach based on reduced-precision computing, and we are already seeing exciting results in that space.
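To make "reduced precision" concrete, here is a toy sketch, in plain NumPy rather than anything from our actual stack, of symmetric 8-bit weight quantization: the float weights are mapped to small integers around a single scale factor, shrinking memory four-fold at the cost of a small rounding error.

```python
# A hedged sketch of the idea behind reduced-precision computing: map float32
# weights to int8 around a per-tensor scale, then map back and measure the
# round-trip error. Production 8-bit training and 2-bit inference involve far
# more than this (custom number formats, calibration, quantization-aware
# training); this only shows the basic precision-for-efficiency trade.

import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float weights to int8."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.max(np.abs(w - w_hat)))
print("bytes: float32 =", w.nbytes, "int8 =", q.nbytes)
```

The storage drops from four bytes per weight to one, while the reconstruction error stays small relative to the weight magnitudes; that is the core trade of precision for power and performance.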
In 2018, our researchers showed, for the first time ever, the ability to train deep neural networks using just eight bits. In 2019, our scientists demonstrated inference over neural networks using just two bits. And more recently, we have expanded our eight-bit training architecture to support a much larger array of neural network architectures. Longer term, we are going beyond digital into analog technologies, using in-memory computing to drive even greater gains in power and performance.

In closing, what's next in AI is the journey from today's narrow AI to the more fluid AI of tomorrow. At IBM Research, we firmly believe that this journey to fluid AI will be powered by neuro-symbolic AI and operationalized by AI engineering. A number of the examples we talked about today were from a business context, but we believe that AI can address some of the biggest problems facing humanity. Whether it is the future of work, the future of health, the future of climate, or the future of how we as a society deal with pandemics, these are all big problems where AI has a fundamental role to play in shaping the arc of how humanity addresses them. At IBM Research, we are very excited by the possibility of applying and advancing AI to address all of these problems. Thank you for your time and attention.