Hi, everyone, and thank you for joining us today. I'm Bethany Hill McCarthy from IBM Research Communications. We're looking forward to a lively discussion today about the latest research in AI automation. First, a little bit of background. Today we see that IT is going through a fundamental shift that increasingly requires CIOs to act as partners in enabling business transformation through AI. As a result, we know that IT will need to become more scalable and adaptable, and that's causing enterprises to move towards a hybrid cloud IT architecture. And we know that AI is going to play a fundamental role in this, especially when applied to code, the language of machines. So we're excited to talk about this and share more with you today. But before we get started, I want to take a minute to run through a couple of quick items and reminders for today's discussion. First, thank you for joining us. I want to acknowledge that this virtual format was established in the wake of the COVID-19 pandemic, and we hope each of you is safe and healthy wherever you are. Second, there are a few links I want to highlight in the top left corner of your screen. One is a blog post about AI innovations for hybrid cloud. Another is a blog post about AI for IT operations. And the final one is a demo video of Mono2Micro, a tool that we'll talk a little bit more about today. Throughout the event, you will have the opportunity to ask questions, either for all the panelists or for a specific person; just indicate who the question is for. You should see a little Q&A window on your screen, and that's where you can submit your questions. We'll save the last 10 minutes of today for questions and try to get through as many as we can. For any additional questions or follow-ups that we don't get to, just let the IBM representative you've been working with know and we'll work to connect you with the appropriate panelists. We'll also have a replay of this event available in the next day or two.
And following this discussion, we'll share that link with each of you, along with other relevant resources. With that, I want to now turn it over to today's moderator, Pat Moorhead, founder, president, and principal analyst of Moor Insights & Strategy. Pat, the floor is yours. And I think what Pat's trying to say as he's on mute is he wants to thank all of you for coming. We have a great panel today of three different panelists. Can you hear me now, Pat? Thank you. Yeah, I had the double mute going, my apologies, but I'm super excited, and I see a lot of my friends out there, and it's just wonderful. And I'm the only person who's not a PhD on this panel, so I am blessed to be sitting with such a smart crowd. So, great topic. We do a lot of research on AI at Moor Insights & Strategy, and this is essentially: what if AI can converse with machines? A lot of consumers are already using a lot of chat-based services out there, and we've seen a lot of machine learning put into commercial use as well. So we're gonna be talking about that topic and also how it intersects with the hybrid cloud. But what I'd like to do first is have the panelists introduce themselves. Let's start off with Dr. Nick Fuller. Thank you, Pat. Good morning or good evening, wherever you are. My name is Nick Fuller. I have the privilege to lead a global team at IBM Research in my role as Director of Hybrid Cloud Services. We're responsible for delivering innovation to differentiate our hybrid cloud platform. Excellent. Let's go over to Dr. Munindar Singh. Hi, everyone. Hope you can hear me. Yes, I'm Munindar Singh. I'm a Professor of Computer Science at NC State University. I've been working on AI for a long time, since my dissertation. My interest has been in the application of AI in businesses, and especially I've looked at challenges that arise from services, processes, and contracts, and, stepping back a bit, notions of accountability. And in the last few years, I've also looked at ethics more and more closely. Excellent.
Let's move on to Dr. Baishakhi Ray. Hello, everyone. Thanks for coming to this panel. I am Baishakhi Ray. I am an Assistant Professor at Columbia University. My main topic of research is how we can apply machine learning to model code behavior, so that we can automate a lot of software engineering and program analysis related tasks and help developers in many ways. Great stuff. So why don't we dive right in here? And probably the best place to start would be a level set. Maybe we can talk about where we are right now in research on this topic, and maybe we can get all the panelists to talk through this one. Let's start off with Baishakhi. All right, yeah. This is a very large and diverse topic, so let me go into detail on what I do in my research group. So, you know, with the advent of, say, GitHub, Bitbucket, et cetera, basically open source software, there is a huge amount of software data available. So right now you can treat code as data. And this code data, we call it big code, as opposed to big data. And this big code has its own properties: it has more structure than, say, a natural language like English; it has different semantic properties, et cetera. So we are building specialized deep learning models to learn these properties from code and leverage that to automate many different tasks. One particular task I am working on, and on which I actively collaborate with IBM, is automatically detecting vulnerabilities. Here the idea is that you can learn from the code, as well as from previous vulnerabilities, and then use that knowledge to detect future vulnerabilities automatically. And again, this topic is very diverse. You can do many other tasks; you can even automate code writing to some extent.
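The "code as data" idea Baishakhi describes can be sketched minimally: treat source snippets as token sequences and learn which tokens co-occur with known vulnerabilities. The tiny labeled corpus and the bag-of-tokens scoring below are invented purely for illustration; the real work uses deep models over far richer representations of code.

```python
# Toy sketch of "code as data": learn which tokens co-occur with known
# vulnerabilities, then score unseen snippets. The labeled corpus below is
# invented for illustration; real models learn far richer code structure.
import re
from collections import Counter

def tokenize(code):
    # Split source into identifier tokens and single-character symbols.
    return re.findall(r"[A-Za-z_]\w*|[^\sA-Za-z_]", code)

def token_risk_scores(labeled_snippets):
    # Count how often each token appears in vulnerable vs. clean snippets.
    vuln, clean = Counter(), Counter()
    for code, is_vuln in labeled_snippets:
        (vuln if is_vuln else clean).update(set(tokenize(code)))
    return {t: vuln[t] - clean[t] for t in vuln | clean}

def score_snippet(code, scores):
    # Positive totals suggest the snippet resembles past vulnerable code.
    return sum(scores.get(t, 0) for t in set(tokenize(code)))

corpus = [
    ("strcpy(dst, src);", True),           # classic unchecked copy
    ('sprintf(buf, "%s", s);', True),      # unbounded format write
    ("strncpy(dst, src, n);", False),      # bounded variants labeled clean
    ('snprintf(buf, n, "%s", s);', False),
]
scores = token_risk_scores(corpus)
print(score_snippet("strcpy(out, in);", scores))      # positive: flagged
print(score_snippet("strncpy(out, in, n);", scores))  # negative: not flagged
```

In practice, as she notes, such models are trained on big-code corpora mined from open source repositories and past vulnerability reports, not a handful of hand-labeled lines.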
You can synthesize small programs. Of course, not bigger ones; with current technology I cannot synthesize a whole big codebase, but I can synthesize very small functions, things like that. So yeah, this whole area is very exciting, and there is a lot of research opportunity in this direction. Yeah, what a great way to start out. Thank you for such a comprehensive answer here. Maybe we give the same question to Munindar. Thanks. Yes, I would say that research is at a stage where we've made enough progress that we should be interested; it's on the cusp of greatness, I would say, in that we are able to understand code more effectively than we ever were. There are techniques, and some of them I'm working on with colleagues in research, for abstracting services from existing, legacy kinds of processes and software. The interesting thing there is that they apply not only AI techniques, but combine those AI techniques with an understanding of the traditional computing semantics that you use as a basis for software. Nick, do you have anything to add to this? I hope you do. Absolutely, Pat, thank you. So, pulling together the pieces that you got already from Professors Ray and Singh, the novelty here is the following. AI applied to machine language, the language which underpins information technology, is an area that is essentially accelerating. And it's not as if there hasn't been AI applied to information technology before. In fact, in Pat's intro, he touched on bots, for example. Bots today provide technical support for a range of products, and IBM and others have majored in this space for some time. But when you look at what is critical to traditional enterprises going forward, it's the ability to unlock the value of cloud for their mission-critical workloads.
That's a challenge. The fundamental work we're doing in collaboration with Professors Ray and Singh and many others in academia helps us to advance the knowledge in that space to ultimately get mission-critical workloads to hybrid cloud. And that's the synergy of hybrid cloud and AI. It's an area of pursuit for us that drives tremendous excitement all the way up to our CEO. Yeah, it is incredible: AI in general is about 50 years old, and we've had a lot of cycles in and out, but there are real companies doing real workloads, solving real problems out there. And, you know, getting computers to talk to each other might sound easy to some, but there's no way it can be that easy. I'm curious, what are some of the challenges that you're facing right now when you're teaching software to communicate with other software? Maybe we can start with Munindar. Yeah, so, at one level, getting software to talk to other software, as you say, seems to be the simplest possible thing. But the challenges apply at various levels. And oddly enough, these are challenges that people have known about for decades, from the very early days of computing. When people came up with the abstraction of a subroutine, they had to figure out how the subroutine could be called, and there was a lot of debate about how you could do it in a way that would be reusable, so that one person could write a subroutine that another person could use and make sense of. At the same time, or a little later, there was this effort with process integration in manufacturing, with General Motors and those kinds of companies, I think. They wanted to share data with their partners, and they came up with approaches to be able to understand the data, so that one party produces data that another can understand.
In the very early days, they were more concerned with agreeing on even the encoding standards; not everybody used ASCII, and there was UTF and so forth. Later they realized that just agreeing on the encoding wasn't good enough. You needed some more structure to the data to be able to understand it better. The data part of it is, I would say, well understood now. Still later, there was an idea that we should reconcile the processes, and that part is less well understood. And if you abstract further, there's the idea of an understanding of contracts: one piece of software written by one person should interpret a request from another piece of software written by a second person in a way that makes sense. In other words, it comes down to the challenges of dealing with heterogeneity. These pieces of software are written by different people, and there must be a common language to make it happen. But it turns out the common language is not trivial. It's not just agreeing on the letters you're using or the words you're using; you have to think not just of the sentences, but of how the sentences will be interpreted. So several pieces of what we need to apply in the present setting are well understood, but several others are only becoming understood. And if you think of a concrete question, instead of just software talking to software, let's say you want to convert your legacy applications into services. You'd want the services to be reused; otherwise, why have services? And that means you have to contend with all these challenges about how one service is going to interpret the information passed to it by another service. I could go on, but I should pause here and save more for the questions later. So this is very much not easy. And it seems like anything that has to do with AI or machine learning isn't easy.
Hence we have three PhDs here right now. So, not to make something that's already complex more complex, but I do think there is a role for the cloud here. The notion of the public cloud was very popular for around a decade, and I like to say that the industry used to be in this drunken sailor mode that said everything has to be in the public cloud, even though most of the data was on-prem. And then just recently, pretty much everybody in the industry has agreed that the future of IT is the hybrid cloud and multi-cloud, where you have different cloud operations on-prem, you have a little bit of it in public, and you have some even on the edge. So let me pose this question to Nick. Nick, what is the role of the hybrid cloud in this evolution? Yeah, thank you, Pat. You touch on a very important issue, namely public cloud having a place going back 10 years ago, but the acceptance of hybrid cloud being key going forward. And the reason for that is that 70% or so of traditional enterprises are looking at more than one public cloud vendor to move their workloads to, for a variety of reasons: for openness, for the ability to integrate with their on-premise applications, and so on. Additionally, when you look at the various industry verticals, there are key advantages for regulated industries, be it financial, be it telecom and so on, as far as moving their workloads to multiple public clouds to unlock that value: the agility value, the availability value, the resiliency value, and the ability to connect with other services. And last but not least, remember, only 20% of those mission-critical workloads have moved to the public cloud already. Now, that's the business point of view.
When you combine that with the technology pieces that are essential (Professor Singh touched on this before): modernizing workloads, whether that be replatforming them or refactoring them, key challenges underpinned by the AI for code examples given at the start of the webcast. That journey to take workloads from on-premise to cloud through the various phases, from advise, move, build, to manage, requires sophisticated tooling, tooling with intelligence, what we collectively call AI-infused automation. So that's key. And then the hybrid cloud platform which naturally comes out of this, and OpenShift, of course (that's why we made that huge investment in Red Hat), has to bring intrinsic capabilities to bring those two together: the tooling and the platform, connected through APIs. And when you combine all of that, the technology pieces and the business pieces, that's the $1.2 trillion opportunity in front of us. Yeah, I like the way you laid that out. And the good news is that's exactly how I see the world as well. We were one of the first analyst firms to talk about the hybrid cloud, about a decade ago, and people thought we were crazy, but here we are, it's a reality, and it's what everybody is doing right now. So let's get back to AI. And Baishakhi, this question is for you. What is the role of AI in this evolution, with a specific focus on machine language? You know, this machine language is very diverse. For the kinds of applications Nick and Munindar were talking about, you can think at a very high level, where you are talking about different systems, different services, at a macro granularity. And many other researchers like me, we look at the low level, where we go towards the source code or even execution traces, stack traces, et cetera.
And I believe that the opportunity is huge because it spans this whole code spectrum, from the configuration space and the refactoring of legacy services to modernize them, down to the lowest end, to machine language. And at whichever granularity you look at this machine language, it has very different properties, and you have to come up with different kinds of AI modeling or AI techniques; those are very much specific to your application and to the granularity at which you are applying them. And there is a huge potential for innovation. We're not hearing you at the moment. That double mute, my apologies. Let's move to the next question. It's like tying your shoelaces twice. I won't do that again, I promise. So, Munindar, this question is for you. We have legacy software that's useful, obviously. There are some financial institutions that have 50,000 apps that have been built over the last 50 years. It's just incredible, but that doesn't necessarily make them current. How can AI help modernize software for what I'll call the cloud age? Yeah, that's a good question. And I think your analogy with the shoelaces is right on, because you really do need to maybe untie the shoelaces twice to make it work. So, there is the kind of work that Baishakhi and her community pursue, which is looking at source code line by line, and I think that's essential. And then there's the other end of it: if you're trying to modernize legacy applications, you have to understand the business processes where they fit in also. Otherwise, how could you make sense of what's going on in the code if you didn't know where you were headed? And I think AI can apply in both of those aspects. As I said earlier in response to your previous question about getting software to communicate with other software, it comes down to meaning, right? At some level, it's the interpretation of this communication that we need.
But that interpretation partly relies on the words you are using, which is, you could say, the machine language aspect of it, and partly relies on the context of where they fit in, like what the previous discussion was. If I use a pronoun, how do you know what I'm talking about if you didn't know what I talked about earlier? So, I would use that analogy and say that we need AI at both of these levels. We need AI to understand the business processes at a general level in a way that produces representations that can advise the low-level analysis. And then similarly, from the code analysis, we need abstractions that have some bearing on the processes that we're trying to conduct. And if you can do those together in a nice way, then we have a hope of converting these monolithic applications into elegant systems of microservices. If you tried to do only one of them, then we'd have the shoelace tied in the wrong place, as you said. I'm pretty sure that this is beyond theory at this point. Are there any real examples of utilities out there that are currently doing this? Yeah, I think there are pieces of them. I know that on the business process side, there's lots of discussion in the finance industry, for example, and in healthcare, where things are messed up. But as far as I know, they haven't done too much of the code analysis there. Then my colleagues at IBM have been looking at the code analysis for a long time, and more and more recently. So, in a way, you could say that both of these aspects have been addressed, but maybe they have not yet been addressed in a unified manner, and maybe the current efforts will take care of that. Yeah, it's pretty cool. I've seen some people in the financial industry do an analysis of their code, and it actually finds dead links that aren't ever used.
And it's not as easy as just cutting that bad code out, but it certainly tells you what you have to transform and what you don't have to transform. And I find that really cool. So, Nick, I mean, yeah. Sorry, just to finish the thought: that dead code and that bad legacy code is reflected in horrible processes. It might take them a month to handle a transaction which literally should not take any more than an hour, you would think. And so there are other examples of that. That's cool. So I'm gonna turn it back to Nick. This question is for you. We talked about the synergy between the hybrid cloud and AI. What are some of the notable successes that you've seen emerge out there? Yeah, thanks, Pat. So one of the things we did, going back to the Think symposium earlier this year, was to launch this AI for IT initiative, specifically focused on building innovative, AI-infused automation tooling spanning the full application life cycle management of the workload journey to cloud, and again, with a major focus, of course, on mission-critical workloads. And so, when you look at the fundamental pieces, ultimately underpinned by AI for code, part of that allowed us to make announcements with respect to automation tooling for modernization. If you look at one of the links Bethany referenced at the beginning of the webcast, we have a beta product known as Mono2Micro, which performs code analysis, takes in additional inputs, and is able to refactor applications, identifying dead code as well. Additionally, with our services units, we launched Application Modernization Accelerator with AI, a toolkit made up of a suite of different tools that ultimately take you through that modernization journey from the advise phase, identifying what makes sense to containerize and what doesn't.
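The dead-code detection Pat and Nick mention can be sketched, at its simplest, as reachability analysis over a call graph: anything never reachable from the application's entry points is a candidate for removal. The call graph and function names below are hypothetical; tools like Mono2Micro combine static analysis with runtime traces and do considerably more than this toy.

```python
# Toy sketch of dead-code detection via call-graph reachability: functions
# never reachable from the entry points are dead. The graph is hand-written
# here; a real tool would extract it from the application's source.
def unreachable(call_graph, entry_points):
    # BFS from the entry points; anything never visited is dead code.
    seen, frontier = set(entry_points), list(entry_points)
    while frontier:
        fn = frontier.pop()
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                frontier.append(callee)
    return set(call_graph) - seen

# Hypothetical legacy app: 'legacy_report' is never called from 'main'.
calls = {
    "main": ["load_accounts", "post_transaction"],
    "load_accounts": ["parse_record"],
    "post_transaction": ["parse_record"],
    "legacy_report": ["parse_record"],
    "parse_record": [],
}
print(sorted(unreachable(calls, ["main"])))  # ['legacy_report']
```

As Pat notes, flagging such code is only the first step; deciding what to cut and what to transform still needs a human architect.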
And once you've identified what makes sense to containerize, how do you actually go about that journey? What's the GPS to get you there, given that each client and each application will have a unique journey, so to speak? And then finally, what are the microservices that you recommend, and how can you now construct them in an automatic fashion? Various pieces of that toolkit were actually released for general availability earlier this quarter, going back to 3Q. The other big announcement we made back then was around application availability. Getting to cloud is one thing; wherever applications reside, they need to be available. Outages cost time, and they certainly cost money; there are estimates that major outages can cost all the way up to half a million dollars or more. And so we launched what is known as Watson AIOps, a product for addressing incident management and outages related to changes made to applications. And that capability, that tool, also went GA earlier this quarter. So these are some notable advances, if you will, that we've made in this space of AI for IT overall, and we continue to ramp up as we look forward to the future. Well, I tell you, Nick, I remember seeing that at Think, and it literally blew my mind, some demos of that tool, particularly the dead code and what should be containerized and what shouldn't. It was pretty awesome. So, just a reminder to our attendees: you can ask your questions now in the chat box. We don't have a chat bot, maybe we should. At the very end of the show, we can take them. So let's move to security right now. It's funny, I kind of think of this as spy versus spy, and we now have nation-states with budgets to go in and hack people. You know, hacking reached prime time when there's actually hacking-as-a-service out there; you can literally go onto the dark web and put an order in, just like you might, you know, fire up a public cloud. It's pretty crazy.
So what I'd like to do is ask Baishakhi: how big of a concern is security when you have software automatically talking to and working with other software? Can you talk a little bit about some of your work on security vulnerabilities, risk analysis, and vulnerability analysis? Right. So, security is always a concern, whether you apply AI or not. I think the role of AI is twofold here, at least the way I see it. First, when we think about traditional applications, like legacy applications, et cetera, there are usually repetitive patterns in the way hackers hack your software. And that pattern is not always obvious; it's noisy, et cetera. But with this advent of big code, now we can learn those patterns, like what are the potential contexts in the code or in your environment that make it prone to a security attack. And we are now building models to automatically identify those security-critical regions which are prone to attack, and so far the results look very promising. So AI has already started playing, and will continue to play, a very important role in identifying those cases which are prone to attack. That said, there is another problem I think AI can address. Currently, AI-based software is coming up in the cloud, and it comes with another kind of security threat: there, you can say AI is communicating with other models, not only with traditional software. I think this new kind of AI-based software will also be another point of our discussion when we talk about security. Yeah. Nick, IBM is very entrenched in the security area. In fact, I think by a lot of estimates, IBM is the largest security company on the planet. I'm sure you have some comments on this. Absolutely. In fact, in our 2019 annual report, we mentioned looking at over 70 billion events a day, right?
In that report. It's interesting, because the security and compliance concerns for traditional enterprises don't go away; in fact, they go up as they move to cloud. And so we're approaching this from a holistic point of view, right? There's the fundamental, absolutely critical work that we're doing in research with Baishakhi's team, work that Baishakhi is also conducting herself, and I'm sure many others, from a build point of view. So clearly, what happens at build time is absolutely critical. If you can minimize what's deployed, that's fantastic. But once it's actually deployed, right, there are concerns there as well. In fact, we did a cost of a data breach study not too long ago, and that study revealed that 11 vulnerability detections cost organizations upward of 57 billion, a huge total when you think about it, given that it's only 11, right? We haven't gotten into the hundreds or thousands, so to speak. And so there's critical work we're doing at runtime as well in IBM Research, coupled with our security business unit, looking at going beyond what NIST's (the National Institute of Standards and Technology's) Common Vulnerability Scoring System does, namely assessing the degree to which a vulnerability can be weaponized by a malicious attacker. You talk about going to hacker sites on the dark web and prescribing what you want. The idea here, with these services in X-Force Red, is to programmatically enable us to determine not only the weaponization of those vulnerabilities, but the skill level a hacker would need to have in order to weaponize a given vulnerability. And when you put together these capabilities, which we recently released for general availability as well, it allows us to get to a level of scale, because these things were being handled manually before, and clearly that doesn't get you to scale.
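Nick's point about going beyond a base severity score can be sketched as a simple prioritization rule: start from a CVSS-like base score and weight it by how readily the flaw can be weaponized and how little skill an attacker needs. The weights, field names, and records below are invented for illustration; they are not X-Force Red's actual model.

```python
# Toy sketch of vulnerability prioritization: weight a CVSS-like base score
# by weaponization ease and (inverted) attacker skill required. All numbers
# and field names here are hypothetical, invented for illustration.
def priority(vuln):
    # Higher weaponization and lower required skill raise the priority.
    return vuln["base_score"] * vuln["weaponization"] * (1.0 - vuln["skill_needed"])

findings = [
    # Severe on paper, but hard to weaponize and needs an expert attacker.
    {"id": "CVE-A", "base_score": 9.8, "weaponization": 0.2, "skill_needed": 0.9},
    # Lower base severity, but trivially weaponized by a low-skill attacker.
    {"id": "CVE-B", "base_score": 7.5, "weaponization": 0.9, "skill_needed": 0.1},
]
ranked = sorted(findings, key=priority, reverse=True)
print([v["id"] for v in ranked])  # CVE-B outranks CVE-A despite its lower base score
```

The point of the sketch is the ordering: programmatic scoring lets an organization triage thousands of findings by exploitability rather than by base severity alone.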
So with the programmatic enablement in the X-Force Red suite of capabilities, you can imagine how we can finally allow organizations to prioritize what they look for and what they address first from a vulnerability standpoint, to minimize security disruptions. Gosh, I love that. I talked about spy versus spy, and one of the biggest challenges that IT has today is fielding the amount of red alerts. And when you have hackers using machine learning, really the only thing you can do is have machine learning on the other side to react to it. It's fascinating. So let's move into our next question here. And maybe, Baishakhi, I can hit you up with this, if that's okay. What are the biggest misperceptions you see around the capabilities of these types of automations? How should the average person think about this and the role of AI and automation in code? So, again, one thing I have noticed when we are building these models is that most of them are very application-specific. If I build a model on the Linux operating system, it might not work very well on the Mac operating system. So basically, I think the generalizability of these models is still lacking. They are quite environment-specific and very much tied to your applications. That is, I think, one major issue if we want to adopt AI in a much larger context for the whole tool chain. Maybe, and people have already started talking about it, a more generic way to represent this whole tool chain in a machine learning context would be necessary. Excellent, thank you. And sorry to put you on the spot there. But maybe we can go to Munindar: do you have any thoughts on this? Yeah, a couple of thoughts. I also wanted to comment briefly on the security thing, since we talked about it. I think modernization provides, for security, both an opportunity and a challenge.
The opportunity: with monolithic systems, typically, once they are breached, they are breached. With modernized, service-based systems, you have the hope that you could still function while there's an attacker, so you could achieve something that's resilient. And it relates also to the misperception, because what happens through applying AI is sometimes that the AI tools themselves are more complex. So your end product may not be complex, but the tools are, and sometimes people don't realize, first of all, that there could be a problem in the tools themselves; there could be security problems induced by the tools. And secondly, sometimes they don't realize, and this echoes Baishakhi's comment, that this is very much an application-specific or domain-specific exercise: although the tool will help you, it's not going to be push a button and you get an answer out of it. And I've noticed that sometimes with people I've interacted with who are not computer scientists, although they've used computing a lot, they sometimes have this view, a big misperception I think, that AI is a bit like magic: here's a problem, and the AI will just solve it. And I think it never quite turns out like that, because you have to think about the representations you want to build for the AI to run on. And there are advances which reduce the effort that you have to put into the representations, but that misperception still remains, and the work is never quite as automated as you would hope it should be, hope it could be, I guess. So maybe we can wrap up, Nick. I'm sure you've got a lot to say on this topic. I saw "automation" and bells went off, I'm sure. Yeah, absolutely, Pat. So I want to echo what Munindar said a second ago, right? AI isn't a hammer in search of a nail, right?
It's more a question of where we see opportunity, like in security, mentioned a few minutes ago, like in modernization, and in specific aspects of modernization as well. Now, when I use the phrase AI-infused automation tooling, one potential conclusion that one might derive, technical or not, is that, okay, there's essentially a replacement of the practitioner happening with such tooling, and that couldn't be further from the truth. In fact, if you go back to the days of Jeopardy!, when we first introduced the notion of AI with Jeopardy!, we talked about AI augmenting what a practitioner has to do, and that's very much applicable in the hybrid cloud context. And I'll use just modernization as an example, which we've talked about as well. When we go through that refactoring process, it's very much the case, given the uniqueness of every application and the uniqueness of every industry vertical, that there may be the need for some customization, for the architect to add such customization based on use cases and so on. No different, as well, from the refactoring of the application from a gap analysis point of view: we need the practitioner, we need the architect, to validate that and, ultimately, the functionality of the application. So I just really wanna reinforce the point that it's an augmentation approach. Certainly the tools are absolutely critical, and they're complex, as mentioned before, but they get us to scale. They eliminate the need for the practitioner to conduct mundane manual tasks, which ultimately take a long time and impede scalability. So they get us to scale, and in many ways they upskill the practitioner by working in conjunction with the tooling. That's awesome. So what I'd like to do is close out the formal portion of this and move into Q&A. I can't believe we've been chatting for 40 minutes here, but this is wonderful. So we do have questions piling up here, and if you are in the press, please add your questions in here.
So let me see, the first question we have is, and this one's for Nick: Nick, do you see specific industries being more aggressive than others in maturing and adopting AI for IT? Any observations on global geography, if there are regions pushing for faster adoption?

Thank you, Pat. Excellent question. The regulated industries actually are pushing quite aggressively, interestingly, from a financial services point of view in particular. And I think if you look back at the history of this, from a sensitivity point of view and a security and compliance point of view, the maturity of cloud has grown. They recognize that they're staffed with talented professionals who've kept pace with what has happened in the industry as cloud has grown and matured. But they also understand that with cloud they can unlock true agility and the ability to augment their applications, once refactored and restructured, with the various services that cloud providers, the major vendors, all of the hyperscalers, IBM and others, provide. So we're actually seeing quite an aggressive push from financial services for sure. Other industries as well, but I will highlight financial in particular.

No, that's good. And if I were going to answer that question, it would have been the same thing. Financial industries really seem to be trailblazers in machine learning, partially because they have a big need for it, but also because they have very large research groups that are experimenting and doing a lot of things. So we have another question here related to the rise of edge devices. And the question is: do these innovations in AI contribute to the rise of edge devices? And if so, how? And maybe, Baishakhi, we can start with you. Put you on the spot here.

So again, I think edge devices and IoT, these have huge scope for AI automation. One of the main reasons is that these devices themselves generate lots and lots of data and constantly populate data, right?
So how to extract information from their output is very much in the scope of data analytics, and then there's this whole toolchain question of how to smartly process that data, right? And then build applications on top of it. So yeah, another very relevant software engineering aspect of these devices is that their development depends a lot on frameworks. And again, you have to customize. So there is this common framework part, and for an individual application or individual hardware, you need customization. So this blend of framework-based development and customization, and how AI can play an important part there, is a very interesting thing to think about.

So Pat, can I add to that? Yeah, I think edge devices, and especially IoT, bring out an aspect of AI which is underplayed today, which is that you're dealing with autonomous parties. A lot of what we are thinking about today works well where there's one locus of control, but when you have thousands of entities computing at the same time, and autonomously, it throws up huge new challenges, and they are certainly worth pursuing.

Yeah, it's interesting if you look at the number of devices, or just hardware, that has, let's say, neural networks running on it: smartphones have more concurrent neural networks running on them than any other edge device out there. And I'd even posit that AI in the hybrid cloud is truly enabling all of these edge devices. So it's kind of a chicken-or-the-egg scenario; they play off of each other, and we have this constant push and pull now over where that AI computation is done, right? Is it done close to the edge? Is it done up in the cloud, or at some intermediary step? What's happening in practice now is all of the above. So let me move on to another question that I got in, because quite frankly, everybody wants to hear from the PhDs, not from the analyst pundit here.
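A minimal sketch of the edge-analytics pattern described above: a detector small enough to run on the device flags unusual readings locally, so only interesting events need to be shipped to the cloud. The window size, threshold, and sensor readings below are all invented for illustration.

```python
# Toy on-device anomaly detector: flag readings that deviate sharply
# from the rolling statistics of a small window. Parameters and data
# are invented for this example.
from collections import deque
from statistics import mean, stdev

class EdgeAnomalyDetector:
    """Flag readings far from the rolling mean of a small window."""

    def __init__(self, window=5, threshold=3.0):
        self.window = deque(maxlen=window)  # recent readings only
        self.threshold = threshold          # z-score cutoff

    def observe(self, value):
        anomalous = False
        if len(self.window) >= 2:
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

detector = EdgeAnomalyDetector()
readings = [20.0, 20.5, 19.8, 20.2, 35.0, 20.1]  # one obvious spike
flags = [detector.observe(r) for r in readings]
```

Only the spike gets flagged; in a real deployment, that single flagged event, rather than the whole stream, is what would be forwarded to the cloud or an intermediary tier, which is the edge-versus-cloud push and pull mentioned above.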
So AI research is constantly evolving. Looking back, how have you seen the focus of your research change over time? And Munindar, why don't I start with you?

Yes, there have been changes. Of course, when I was a PhD student... well, neural networks have been around since the '50s, long before my time, but they had come into and fallen out of favor several times already by the time I was a student. At the time I was a student they were not much in favor, partly because, obviously, the computing power didn't exist, we didn't have the data to use them, and also the mathematics wasn't that well understood, so people didn't really know how to deal with them. So the biggest change in the last decade or so has been the revival of neural networks, moving from the more knowledge-based approaches to the neural network style of approaches. And what I see coming down the pike now is that people want to reconcile the two: you have these current neural networks, which take millions of inputs to learn something, and you compare them to a human baby who hears just a few words and is able to construct new sentences they've never heard before. So there's a challenge about how to reconcile them, how to understand the structure of meaning properly. Those are the kinds of emerging tasks that are coming up. Yes, lots of ideas.

Nick, what are you seeing from your side? Yeah, interestingly, I gave an internal talk to worldwide architects in IBM, and one of the things I touched on was that if you go back, say, 10 years, to the work we were doing on identifying issues associated with applications running on premises, the approach that we took there was heavily around what in AI we refer to as narrow AI.
And if you think of the AI taxonomy as narrow, broad, and then ultimately general, narrow AI is essentially underpinned, of course, by deep learning, but with large amounts of labeled data, so a supervised approach. As you get into broader AI, concepts like learning from less data, concepts like trust, a huge issue, and concepts like being able to provide explainability begin to emerge. And a lot of what we've done has dovetailed in that direction. The security work highlighted before falls into that category; similarly, some of the modernization work that we're doing, and likewise the work around IT operations. And so that to me is the biggest change. Practitioners want to really understand why they're being told to do XYZ, right? If my job is on the line for making a decision about modernizing, or being a security focal, or being an SRE, a site reliability engineer keeping an application up, and the AI is telling me to do this but I don't understand why, that's a problem. So explainability helps in that augmentation process. That's a huge deal. That's the biggest change I would say we've seen in that progression as AI has evolved: moving towards a broader AI approach as applied to this notion of AI for IT.

That's great insight, Nick, thank you. And here's the next question. Nick, this sounds like a question for you, but I'd like everybody to chime in. This is officially our last question. So Nick, what's the one thing CIOs should know about how this research is shaping future commercial offerings?

Yeah, so to me, this comes back to a couple of fundamental things, right? Research doesn't reside in the lab only anymore. In fact, this panel, and it's a pleasure to be here with my esteemed panelists today, is a representation of that: partnerships across the academy with industrial labs, and taking it a step further by co-creating with clients. No longer can we subscribe to the notion of build it and they will come, right?
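As a toy illustration of the explainability point above, and not any specific IBM tool, here is a sketch of perturbation-based feature importance: zero out one input at a time and measure how much the model's recommendation moves, so the practitioner can see why the tool is telling them to act. The model, feature names, and weights are all invented.

```python
# Toy perturbation-based explanation of a model's recommendation.
# The "model" and its inputs are invented for this example.

def risk_score(features):
    """A stand-in model: higher score means recommend remediation."""
    return (0.6 * features["failed_logins"]
            + 0.3 * features["open_ports"]
            + 0.1 * features["days_since_patch"])

def explain(model, features):
    """Zero out one feature at a time and measure how far the score
    moves; rank features by the size of that shift."""
    base = model(features)
    impact = {}
    for name in features:
        perturbed = dict(features, **{name: 0})
        impact[name] = abs(base - model(perturbed))
    return sorted(impact.items(), key=lambda kv: -kv[1])

obs = {"failed_logins": 8, "open_ports": 5, "days_since_patch": 30}
ranking = explain(risk_score, obs)
```

Here `failed_logins` dominates the ranking, which is the kind of "here's why" signal that lets a security focal or SRE trust, or push back on, the tool's recommendation rather than follow it blindly.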
So you build something, you continue to evolve it, and then, oh, by the way, let's push it out to the market? No, it's a very different approach. And CIOs should take solace in the fact that this is exactly what we're doing, especially at the intersection of AI for IT and hybrid cloud. So as we build an initial MVP, and we touched on this in each of the different examples, the idea is to pilot, right? We take a look at your applications; when you look at the three verticals, this is exactly what we're doing. Ultimately, we get to something that makes sense from a CIO point of view and from a scalability point of view.

Now, I appreciate that. And I will give Dr. Singh and Dr. Ray the last word if you'd like to caboose off of this last question.

Okay, thanks. What I would think is that, at least the way I conceive of a CIO, not having been a CIO myself, their purpose is to facilitate business, right? They're the CIO of a company; they should be thinking about how to realize business processes, how to ensure security, and how to preserve the privacy and confidentiality of the business and the stakeholders. I think what happens a lot of the time is that they end up contending with what to my mind would seem to be trivial concerns. And I think what this technology offers is that it relieves them of those trivial concerns and gives them the opportunity to think about the business processes that they want to achieve. And then, to add to some of the things that Nick said earlier, maybe they can worry about concerns such as accountability. They can worry about trust within the organization and across the organization. Depending on the domain, maybe they can be concerned with safety and those kinds of bigger concerns that they should be addressing for the rest of the firm. I think it offers them the opportunity to do that.

Excellent, great insight. So with that, I'm going to call it a day.
I'd like to thank Dr. Fuller, Dr. Singh and Dr. Ray for spending this time. And finally, I'd like to thank IBM and everybody from the press community who came in and attended. So thank you very much and have a great day. Thank you. Thank you. Thanks.